From: Steve H. <S.W...@ec...> - 2003-08-07 08:42:21
On Thu, Aug 07, 2003 at 10:29:12 +0100, Simon Jenkins wrote:
> The biggest hole in my argument (that I can see :)) is that module
> internal state would often end up in RAM instead of registers in
> blockless code. If a group of modules were heavy on internal state
> but light on interconnections then the blockless code could lose out.

Not really on ia32: there are so few registers that it's not likely to make much difference. On PPC it may matter.

- Steve
From: Simon J. <sje...@bl...> - 2003-08-07 08:28:17
Steve Harris wrote:
> On Thu, Aug 07, 2003 at 01:50:25 +0100, Simon Jenkins wrote:
>> BTW I'm not making this stuff up out of thin air... there's a fairly
>> thorough proof of concept demo at
>>
>> http://www.sbibble.pwp.blueyonder.co.uk/amble/amble-0.1.1.tar.gz
>
> I did a similar thing also, in perl (yuk).

I remember your perl thing! What happened was: I mentioned a pasting-C-together idea in LAD, then found myself actually implementing it (which took a couple of months) to see if it would work. Meanwhile, you did the same thing in a couple of days in perl. However: you+perl aren't *really* 30 times faster than me+C :) A lot of the extra time I'd taken was to make sure that the potential optimisations were actually available in the generated code and visible to the C compiler. I then reproduced your perl demos in amble and found that, indeed, the compiler was able to optimise the amble-generated C code in ways that it could not do with the equivalent perl-generated C code.

> I'm not convinced it's actually faster than blocked processing in the
> general case, but this may be one where it is.

The biggest hole in my argument (that I can see :)) is that module internal state would often end up in RAM instead of registers in blockless code. If a group of modules were heavy on internal state but light on interconnections then the blockless code could lose out.

Simon Jenkins
(Bristol, UK)
From: Steve H. <S.W...@ec...> - 2003-08-07 07:28:06
On Thu, Aug 07, 2003 at 01:50:25 +0100, Simon Jenkins wrote:
> BTW I'm not making this stuff up out of thin air... there's a fairly
> thorough proof of concept demo at
>
> http://www.sbibble.pwp.blueyonder.co.uk/amble/amble-0.1.1.tar.gz

I did a similar thing also, in perl (yuk). I'm not convinced it's actually faster than blocked processing in the general case, but this may be one case where it is. NB SAOL/sfront uses the same technique, so we could possibly use that, and its language is more appropriate to blockless signal processing than C is.

> It's not a true compiler unfortunately: it generates C source code by
> pasting together code fragments, each representing a module, into
> a single internally blockless (but externally block-processing)
> function.

Mine also, but the blocks that were real C code were very tiny (like + and *); everything else was built out of subgraphs. OTOH the other advantage of blockless processing (very low latency in feedback loops) is pretty irrelevant for samplers.

- Steve
From: Simon J. <sje...@bl...> - 2003-08-06 23:49:31
Benno Senoner wrote:
> Interesting thoughts Simon, but I am still unsure which approach wins
> in terms of speed.

I think you misunderstood what I was suggesting. I'm well aware that you mustn't generate one giant loop with a cache footprint bigger than the actual cache! But it's also very inefficient to generate lots of tiny little loops and move data continually from one to the next via buffers when a single loop could have achieved the same computation and still fit in the cache. (It's not the slight overhead of the extra looping that matters. It's that any intermediate values which must cross loop boundaries are forced out of registers and into memory buffers).

The trick is to generate loops which are just about the right size: definitely not too big for the cache but, at the same time, not needlessly fragmented into sub-loops that are too small. IMO the code engine for a single voice is probably just about the right size for a loop, and things would run a lot faster if the code was generated blocklessly *within that loop* than if it was generated as a lot of tiny sub-loops leaving buffers of data in RAM for each other.

The fact that a voice is designed by connecting little modules with wires doesn't mean that the compiled code must connect little loops with buffers! Giving each envelope generator, each filter, each LFO its own individual loop won't speed things up... it will slow them down. (At higher levels of granularity than a single voice, eg your "200 voices" example, everything should of course be processed in blocks).

BTW I'm not making this stuff up out of thin air... there's a fairly thorough proof of concept demo at

http://www.sbibble.pwp.blueyonder.co.uk/amble/amble-0.1.1.tar.gz

It's not a true compiler unfortunately: it generates C source code by pasting together code fragments, each representing a module, into a single internally blockless (but externally block-processing) function. I/O is transferred via buffers but internal connections between modules are modelled by local variables, and many of these get optimised away by the C compiler, becoming temps in registers as I have been describing. Not only does this work, but it delivers the performance advantages I've been talking about. It's not as good as a true compiler could be, but with a bit of work it could actually be hacked into a quick and dirty code generator for LinuxSampler while we wait for the real compiler to arrive. It really could.

Simon Jenkins
(Bristol, UK)
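The register-vs-buffer argument in the post above can be sketched in plain C. This is a minimal illustration, not amble or LinuxSampler code; the module choice (a gain stage feeding a one-pole lowpass) and all names are made up. The "buffered" version connects the two modules through an intermediate RAM buffer, as separate per-module loops would; the "fused" version is what blockless generation within one loop produces, where the intermediate value is a local the compiler can keep in a register.

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

#define BLOCK 256

/* Buffered version: each "module" gets its own loop, and the stream
 * between them round-trips through a RAM buffer. */
static void gain_then_lowpass_buffered(const float *in, float *out,
                                       float g, float a, float *state)
{
    float tmp[BLOCK];                      /* intermediate stream in RAM */
    for (size_t i = 0; i < BLOCK; i++)
        tmp[i] = in[i] * g;                /* module 1: gain */
    for (size_t i = 0; i < BLOCK; i++) {
        *state += a * (tmp[i] - *state);   /* module 2: one-pole lowpass */
        out[i] = *state;
    }
}

/* Fused ("blockless inside the block") version: the intermediate value
 * lives in a local variable, never in a buffer. */
static void gain_then_lowpass_fused(const float *in, float *out,
                                    float g, float a, float *state)
{
    float s = *state;
    for (size_t i = 0; i < BLOCK; i++) {
        float t = in[i] * g;               /* candidate for a register */
        s += a * (t - s);
        out[i] = s;
    }
    *state = s;
}
```

Both functions compute the same samples; the point is only where the intermediate values live, which is what the compiler's register allocator can exploit in the fused form.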
From: Benno S. <be...@ga...> - 2003-08-06 07:08:13
Interesting thoughts Simon, but I am still unsure which approach wins in terms of speed.

Since the audio processing is block based (we process N samples at a time, where N is preferably the audio fragment size of the sound card), and since there can be hundreds of active voices, each voice can have its own modulator. Assume we use blocks of 256 samples and we have 200 voices active, each with an envelope modulation attached to it. This means that during the DSP cycle (that generates the final 256 output samples), 200 envelope generators write 200 * 256 samples = 51200 samples (assuming we want very precise envelope curves, thus we allow one new volume value per sample). Since the DSP engine is all float based (4 byte) we end up touching 51200 * 4 = 204800 bytes of data. This puts (IMHO) quite some stress on the cache, perhaps slowing things down because audio mixing requires lots of cache too.

OTOH you say linear events are some form of "compression". Yes they are, but I do not see it as an evil kind of compression, since compared to the streamed approach (the envelope generator "streams" the volume data to the audio sampler module) it requires only one more addition, which is a very fast operation whose execution time probably goes down in the noise when compared to the whole DSP network.

Perhaps for single-sample-based processing the streamed approach is the way to go, since the data gets consumed immediately, but AFAIK on today's CPUs, even if you could run an exclusive DSP thread with single-sample latency (assume there is no OS in the way that complicates things and you are the only process running), performance would be worse than using block based processing due to worse locality of the referenced data compared to the block model.

If I said nonsense or if my approach is flawed performance-wise, let me know.

cheers,
Benno

Scrive Simon Jenkins <sje...@bl...>:
> IMHO events are best used for units of "musical meaning", eg the sorts of
> things that MIDI encodes moderately well (provided you are a pianist).
> That sort of stuff enters a synthesis engine's inputs, may get moved
> around and processed a bit, but sooner or later it's got to be turned
> into something the audio end of the engine can actually *work* with...
> a continuous stream of data either at sample-rate or some lo-fi
> subdivision of it. Why delay the inevitable? It's an envelope-generator's
> *job* to turn some events into a data-stream according to some parameters.
>
>> Of course nothing forbids us to implement that approach too.
>> But I think modulation by linear segments is flexible enough
>
> Linear segments aren't so much "events" as they are data-compressed
> versions of continuous streams. The recipient has to decompress them
> back into a stream (probably "on the fly" by interpolating along the
> segments as it goes) before modulating any audio with them.
>
> It's a performance overhead, not a saving, to do...
>
>     events+params -> envelope encoding -> envelope data stream
>
> rather than directly
>
>     events+params -> envelope data stream
>
> unless...
>
>> and is IMHO one of the fastest approaches since the amount of data
>> moved between modules is small.
>
> ...you are planning to win back the time by moving less data around.
>
> However: if the synth engines are going to be compiled, then data streams
> don't have to be moved anywhere. Nothing except the final audio
> outputs ever needs to leave the engine, and the compiler can
> generate code which *takes each value from wherever it already is*.
>
> (If the generated code were internally blockless and reasonably
> optimised then the data for a lot of streams would never even
> make it to RAM: it would appear in an FPU register as a result
> of one FP operation and be almost immediately consumed
> from there by a subsequent FP operation.)
>
> Simon Jenkins
> (Bristol, UK)

-------------------------------------------------
This mail sent through http://www.gardena.net
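Benno's cache-footprint arithmetic above is easy to check mechanically. A tiny helper (hypothetical name, not from any LinuxSampler source) reproduces the 200-voice, 256-sample, one-float-per-sample worked example:

```c
#include <assert.h>
#include <stddef.h>

/* Bytes of envelope data written per DSP cycle if every voice streams
 * one float of modulation data per output sample. */
static size_t envelope_bytes_per_cycle(size_t voices, size_t block_size,
                                       size_t bytes_per_sample)
{
    return voices * block_size * bytes_per_sample;
}
```

With Benno's numbers (200 voices, 256-sample blocks, 4-byte floats) this yields the 204800 bytes of per-cycle envelope traffic that he argues the event/linear-segment approach avoids.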
From: Josh G. <jg...@us...> - 2003-08-06 01:15:24
Well, after a few days of hacking, libInstPatch now loads GigaSampler files and will synthesize to some extent using FluidSynth. Still many things missing from the synthesis conversion to SoundFont of course, but it's nice to at least hear something :)

I now have a better idea of what GigaSampler files are like, and I don't like them. Seems like they should have just used DLS 2, which is what they walked all over anyways with their file format. I kind of doubt I will support writing to this format with libInstPatch, since I'd rather convert them to something more friendly.

If anyone is interested in getting the development branch, it is now somewhat functional. Here are some tips:

- Drag and drop icons to the different panes to set what interface is there (and restart Swami when you take out the tree view, since it doesn't have an icon yet :)
- Ooo, pretty sample viewer (compared to the old one): use the mouse wheel to zoom in/out (no other method of zoom at the moment, now that I think about it). Also a vertical (amplitude) zoom.
- Keyboard scales too.
- Load up some GigaSampler files if you have them and listen to the broken loops (will track down tomorrow; works with single samples)
- Have fun and let me know if you actually try it.

Remember to check out the swami-1-0 branch of CVS, not the head branch!

cvs -z3 -d:pserver:ano...@cv...:/cvsroot/swami co -r swami-1-0 swami

Have a look at the Swami download page for more details. Cheers.

Josh Green
From: Simon J. <sje...@bl...> - 2003-08-05 23:59:04
Benno Senoner wrote:
> For audio it's obvious that it is a continuous stream of data, but for
> modulation data I was thinking about using events as described above.
> That way you avoid wasting memory, bandwidth and cpu cycles doing like
> other modular synths where you send modulation data as if it were an
> audio stream (at k rate like in csound).

IMHO events are best used for units of "musical meaning", eg the sorts of things that MIDI encodes moderately well (provided you are a pianist). That sort of stuff enters a synthesis engine's inputs, may get moved around and processed a bit, but sooner or later it's got to be turned into something the audio end of the engine can actually *work* with... a continuous stream of data either at sample-rate or some lo-fi subdivision of it. Why delay the inevitable? It's an envelope-generator's *job* to turn some events into a data-stream according to some parameters.

> Of course nothing forbids us to implement that approach too.
> But I think modulation by linear segments is flexible enough

Linear segments aren't so much "events" as they are data-compressed versions of continuous streams. The recipient has to decompress them back into a stream (probably "on the fly" by interpolating along the segments as it goes) before modulating any audio with them.

It's a performance overhead, not a saving, to do...

    events+params -> envelope encoding -> envelope data stream

rather than directly

    events+params -> envelope data stream

unless...

> and is IMHO
> one of the fastest approaches since the amount of data moved between
> modules is small.

...you are planning to win back the time by moving less data around.

However: if the synth engines are going to be compiled, then data streams don't have to be moved anywhere. Nothing except the final audio outputs ever needs to leave the engine, and the compiler can generate code which *takes each value from wherever it already is*.

(If the generated code were internally blockless and reasonably optimised then the data for a lot of streams would never even make it to RAM: it would appear in an FPU register as a result of one FP operation and be almost immediately consumed from there by a subsequent FP operation.)

Simon Jenkins
(Bristol, UK)
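The "decompress on the fly" step Simon describes, for the (base, increment) segment events Benno proposed, can be sketched as follows. This is an illustrative assumption of what such events might look like, with invented type and function names, not an agreed LinuxSampler interface:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* One "linear segment" modulation event: a start value plus a
 * per-sample increment, covering `length` samples. */
typedef struct {
    float  base;       /* value at the first sample of the segment */
    float  increment;  /* added once per sample */
    size_t length;     /* samples covered by this segment */
} EnvSegment;

/* Decompress the segments back into a per-sample stream, the way a
 * modulation target would interpolate along them on the fly.
 * Returns the number of samples written. */
static size_t env_render(const EnvSegment *segs, size_t nsegs,
                         float *out, size_t max)
{
    size_t n = 0;
    for (size_t s = 0; s < nsegs; s++) {
        float v = segs[s].base;
        for (size_t i = 0; i < segs[s].length && n < max; i++) {
            out[n++] = v;
            v += segs[s].increment;  /* the "one more addition" per sample */
        }
    }
    return n;
}
```

This makes the trade-off in the thread concrete: the event form moves only a few structs between modules, at the cost of one addition per rendered sample at the receiving end.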
From: ian e. <de...@cu...> - 2003-08-05 21:44:55
The downcompiling idea is definitely the way to go, but I think it would be much better if it was one of the things that was left until later to develop. I think people would much rather have a working sampler that used more CPU than have to wait until the downcompiler was ready to have anything they could use at all. Also, the non-downcompiled network is going to be necessary for synthesis network design, so it won't be wasted effort. Just my opinion, but the quicker a useable sampler appears, the better!

ian

On Tue, 2003-08-05 at 11:10, Benno Senoner wrote:
> I wrote a response but I think the mail got lost :-( so I'll try again.
>
> Scrive Steve Harris <S.W...@ec...>:
>> Are we still planning to use downcompiling to make an efficient runtime
>> sampler?
>
> yes. It is not a trivial task but should be manageable.
> (some modules require special care, like filters that maintain state
> information etc).
>
>> Some questions: what does SamplePtr do?
>
> Basically when a MIDI event occurs the MIDI Keymap module picks the
> right sample (key and velocity mapping) and sends a pointer to the
> sample data and the sample length to the sample module.
> Same must be done for looping etc.
> As for modulators I was thinking of using events that send
> (base, increment) values. That way you can approximate complex curves
> via small linear segments while still having a fast engine.
>
>> And is pitch really pitch, or is it rate.
>
> Sorry, yes, I think it is rate. Basically rate = 1.0 plays the sample
> at normal speed, rate = 2.0 double speed (one octave up etc).
>
>> I'm afraid I've been out of the loop a bit recently with other
>> commitments, but I'll do what I can.
>
> no problem, I'm in the same situation so I'm the first that has to
> shut up! ;-)
>
> Benno.

--
ian esten <de...@cu...>
From: Benno S. <be...@ga...> - 2003-08-05 21:13:56
Scrive Steve Harris <S.W...@ec...>:
>> yes. It is not a trivial task but should be manageable.
>> (some modules require special care, like filters that maintain state
>> information etc).
>
> Yes, but if we use LADSPA's API (as was suggested a while back) this
> should be a non-problem, cos LADSPA defines how this is handled.

Ok, but keep in mind that the synthesis network is not composed only of audio modules like LADSPA, but also modulators, event generators etc; thus perhaps ideas can be taken from LADSPA but will probably need to be extended. We will see as we progress.

>> Basically when a MIDI event occurs the MIDI Keymap module picks the
>> right sample (key and velocity mapping) and sends a pointer to the
>> sample data and the sample length to the sample module.
>> Same must be done for looping etc.
>> As for modulators I was thinking of using events that send
>> (base, increment) values. That way you can approximate complex curves
>> via small linear segments while still having a fast engine.
>
> Wouldn't a streaming approach be more appropriate? So the sample source
> module streams PCM data to the audio output module.

You mean using the streaming approach for audio or for modulation data? For audio it's obvious that it is a continuous stream of data, but for modulation data I was thinking about using events as described above. That way you avoid wasting memory, bandwidth and cpu cycles doing like other modular synths where you send modulation data as if it were an audio stream (at k rate like in csound).

Of course nothing forbids us to implement that approach too. But I think modulation by linear segments is flexible enough, and it is IMHO one of the fastest approaches since the amount of data moved between modules is small. If you meant a different thing please let me know (or if your method is more efficient than mine).

Benno
From: Steve H. <S.W...@ec...> - 2003-08-05 16:22:24
On Tue, Aug 05, 2003 at 05:10:17 +0200, Benno Senoner wrote:
>> Are we still planning to use downcompiling to make an efficient runtime
>> sampler?
>
> yes. It is not a trivial task but should be manageable.
> (some modules require special care, like filters that maintain state
> information etc).

Yes, but if we use LADSPA's API (as was suggested a while back) this should be a non-problem, cos LADSPA defines how this is handled.

>> Some questions: what does SamplePtr do?
>
> Basically when a MIDI event occurs the MIDI Keymap module picks the
> right sample (key and velocity mapping) and sends a pointer to the
> sample data and the sample length to the sample module.
> Same must be done for looping etc.
> As for modulators I was thinking of using events that send
> (base, increment) values. That way you can approximate complex curves
> via small linear segments while still having a fast engine.

Wouldn't a streaming approach be more appropriate? So the sample source module streams PCM data to the audio output module.

>> And is pitch really pitch, or is it rate.
>
> Sorry, yes, I think it is rate. Basically rate = 1.0 plays the sample
> at normal speed, rate = 2.0 double speed (one octave up etc).

Yup. NP, I just wanted to be clear.

- Steve
From: Benno S. <be...@ga...> - 2003-08-05 15:10:12
I wrote a response but I think the mail got lost :-( so I'll try again.

Scrive Steve Harris <S.W...@ec...>:
> Are we still planning to use downcompiling to make an efficient runtime
> sampler?

Yes. It is not a trivial task but should be manageable (some modules require special care, like filters that maintain state information etc).

> Some questions: what does SamplePtr do?

Basically when a MIDI event occurs the MIDI Keymap module picks the right sample (key and velocity mapping) and sends a pointer to the sample data and the sample length to the sample module. Same must be done for looping etc. As for modulators I was thinking of using events that send (base, increment) values. That way you can approximate complex curves via small linear segments while still having a fast engine.

> And is pitch really pitch, or is it rate.

Sorry, yes, I think it is rate. Basically rate = 1.0 plays the sample at normal speed, rate = 2.0 double speed (one octave up etc).

> I'm afraid I've been out of the loop a bit recently with other
> commitments, but I'll do what I can.

No problem, I'm in the same situation so I'm the first that has to shut up! ;-)

Benno.
From: Steve H. <S.W...@ec...> - 2003-08-05 06:17:43
On Tue, Aug 05, 2003 at 01:40:47 +0200, Benno Senoner wrote:
> I started to write the GUI for the module editor for Linuxsampler.
> See this screenshot for now (code will follow later).
>
> http://www.linuxdj.com/benno/lsgui4.gif

Nice, you can never have too many modular synth style interfaces ;)

Are we still planning to use downcompiling to make an efficient runtime sampler?

Some questions: what does SamplePtr do? And is pitch really pitch, or is it rate?

I'm afraid I've been out of the loop a bit recently with other commitments, but I'll do what I can.

- Steve
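The pitch-vs-rate question Steve raises (answered in the thread as "it is rate: 1.0 plays at normal speed, 2.0 one octave up") amounts to stepping a read position through the sample by `rate` each output sample. A minimal sketch with invented names, using linear interpolation between neighbouring sample frames (one common choice; a real engine might interpolate differently):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Read from `sample` at the given rate: 1.0 = normal speed,
 * 2.0 = double speed (one octave up), 0.5 = an octave down.
 * Returns the number of output samples produced. */
static size_t play_at_rate(const float *sample, size_t len,
                           double rate, float *out, size_t max)
{
    double pos = 0.0;  /* fractional read position into the sample */
    size_t n = 0;
    while (n < max && pos < (double)(len - 1)) {
        size_t i = (size_t)pos;
        double frac = pos - (double)i;
        /* linear interpolation between adjacent frames */
        out[n++] = (float)((1.0 - frac) * sample[i] + frac * sample[i + 1]);
        pos += rate;   /* the "pitch" port is really a playback rate */
    }
    return n;
}
```

On a linear ramp, rate 2.0 visibly skips every other frame, which is exactly the octave-up behaviour described in the mail.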
From: Benno S. <be...@ga...> - 2003-08-04 23:40:36
I started to write the GUI for the module editor for Linuxsampler. See this screenshot for now (code will follow later).

http://www.linuxdj.com/benno/lsgui4.gif

About 1.5 days of coding (Qt lib) ;) You can create modules with an arbitrary number of input and output ports, which can be of several types (midi, audio, control ports etc). You can connect ports that are of the same type and move the modules around the screen. As you can see in the screenshot, the idea is to make a very general purpose system which is not tied to MIDI. The screenshot shows that the MIDI keymap module acts as a proxy between the MIDI data and the sampler module itself, which knows nothing about MIDI etc. More details about the GUI within the next days... meanwhile if you have suggestions to make (GUI wise) speak out loudly ;-)

Here follows a response to Josh's mail:

Scrive Josh Green <jg...@us...>:
> If Linuxsampler can be thought of as a network of synthesis modules then
> couldn't it be possible to have an API for creating a synthesis model
> for a particular instrument format? So say for a SoundFont one would do
> something like (pseudocode follows):
> (....)
> etc.. I am still of the opinion that the instrument patch objects should
> be decoupled from the synthesis engine, due to its real time nature. So
> a handler for a particular instrument format would then set up the
> synthesis model (perhaps the network would be compiled or optimized in
> some way). (....)
> I'm not sure how efficient this would be in practice but it does fit the
> modular goal. The nice thing about this is that other projects could
> take direct advantage of Linuxsampler (such as Swami).

Yes, the goal is to design it that way, thus it will be easy to build a powerful SF2 engine too.

Regarding swami / libInstPatch etc: please work together with Christian S. on the DLS / GIG loading stuff since he is interested too. As said, LinuxSampler needs some lib that decodes the chunks and presents them in a way that is easy to handle for the engine. The fewer dependencies it has, the easier it is to use the lib within linuxsampler.

cheers,
Benno
From: Marek P. <ma...@na...> - 2003-08-01 20:24:13
> Some of the developers (Marek? ;-) ) do not like Qt that much (because
> it is C++ based, or because Qt is GPL and not LGPL like gtk)

Personally I have nothing against Qt being GPL or C++ or whatever; in fact Qt is cool, but I simply like GTK+ better, that's all. :)

Marek
From: Josh G. <jg...@us...> - 2003-07-28 18:03:49
On Sun, 2003-07-20 at 18:29, Benno Senoner wrote:
> Josh is writing LibInstPatch and I think it is a nice lib but I'm
> unsure if it is the right way to go.

Just wanted to clear up one point that Benno mentioned. I'm curious if anyone has actually looked at libInstPatch (not the one with swami-0.9.x, that's totally outdated, but the one in the swami-1-0 development branch; use "co -r swami-1-0 swami" when checking out CVS, explained more on the Swami download page). I really think it is a nice library architecture and it is constantly improving. It already does quite a bit of what you have been talking about:

- IpatchRiffParser - RIFF parser object
- Patch formats are loaded into multi-threaded safe object trees
- "Named" object properties for easy setting of object values
- All file IO done via IpatchFile, which is a virtual file object
- Abstract sample storage (RAM, file, swap, libsndfile, etc)
- Supports DLS and SoundFont files currently; rather trivial to add additional formats
- Has a nice GUI already, although incomplete (Swami)

Anyways, I'm not sure what the goal of the LinuxSampler project is, but I know I'll be continuing work on my own project. If there is some way in which my work could help the goals of LinuxSampler, I think it would be cool. I don't really feel like I have gotten any feedback on whether libInstPatch makes sense or not for this project, so until I do, I will keep pushing it :) Cheers.

Josh Green
From: Josh G. <jg...@us...> - 2003-07-28 14:45:40
On Mon, 2003-07-28 at 16:32, Alexandre Prokoudine wrote:
> Ahem, doesn't Google rule? :-)
>
> http://www.midi.org/about-midi/dls/dls2spec.shtml

Yes it does, almost as much as the MMA sucks. You can happily download DLS1 in electronic form; unfortunately DLS2 is by order only, and they give you a nice printed copy which is much harder to transport on a laptop (my copy is in California, while I'm currently in Germany). DLS1 has much in common with DLS2 though. Cheers.

Josh Green
From: Alexandre P. <av...@al...> - 2003-07-28 14:32:28
Josh Green wrote:
> On Sun, 2003-07-27 at 00:59, Christian Schoenebeck wrote:
>> oI
>> IO Yep, count me in!
>>
>> I hope to see one of you guys in IRC (#lad) today (sunday), so you
>> (perhaps Marek?) can tell me what's already done and what we'll still
>> have to do. I have e.g. no DLS specs yet, so it would be nice if
>> somebody could send them to me.
>>
>> CU
>> Christian
>
> I'm on #lad right now, since I'm also interested in doing GIG support
> with Swami. The DLS loader is already working in libInstPatch, but I'd
> like to add the gig extensions. It's somewhat unfortunate that one
> cannot tell the difference between a .gig and a .dls file until running
> into one of the .gig specific chunks. They kind of polluted the magic
> file name space :) I have DLS specs, unfortunately they are a printed
> copy. If someone has an electronic form, I'd also be interested. Cheers.
> Josh Green

Ahem, doesn't Google rule? :-)

http://www.midi.org/about-midi/dls/dls2spec.shtml

--
Alexandre Prokoudine
ALT Linux Documentation Team
JabberID: av...@al...
From: Josh G. <jg...@us...> - 2003-07-28 13:57:06
On Sun, 2003-07-27 at 00:59, Christian Schoenebeck wrote:
> oI
> IO Yep, count me in!
>
> I hope to see one of you guys in IRC (#lad) today (sunday), so you
> (perhaps Marek?) can tell me what's already done and what we'll still
> have to do. I have e.g. no DLS specs yet, so it would be nice if
> somebody could send them to me.
>
> CU
> Christian

I'm on #lad right now, since I'm also interested in doing GIG support with Swami. The DLS loader is already working in libInstPatch, but I'd like to add the gig extensions. It's somewhat unfortunate that one cannot tell the difference between a .gig and a .dls file until running into one of the .gig specific chunks. They kind of polluted the magic file name space :) I have DLS specs, unfortunately they are a printed copy. If someone has an electronic form, I'd also be interested. Cheers.

Josh Green
From: Christian S. <chr...@ep...> - 2003-07-26 23:00:43
On Sunday, 20 July 2003 at 18:29, Benno Senoner wrote:
> While I would technically be able to do such stuff I'd rather prefer to
> focus on building an efficient low level engine, so it would be handy
> if some of you could implement the GIG importing lib.
> Any volunteers?

oI
IO Yep, count me in!

I hope to see one of you guys in IRC (#lad) today (sunday), so you (perhaps Marek?) can tell me what's already done and what we'll still have to do. I have e.g. no DLS specs yet, so it would be nice if somebody could send them to me.

CU
Christian
From: Josh G. <jg...@us...> - 2003-07-20 21:21:32
|
I've also been thinking more about Swami/libInstPatch and Linuxsampler. I realized that there really is no reason for me to push Swami/libInstPatch for the Linuxsampler project, in fact when I started to think of the modular nature that Linuxsampler aims to be it occured to me that it shouldn't matter. If Linuxsampler can be thought of as a network of synthesis modules then couldn't it be possible to have an API for creating a synthesis model for a particular instrument format? So say for a SoundFont one would do something like (sudo code follows): /* streams and interpolates audio data, loop and tuning parameters */ add_wave_table ("Wavetable") add_lowpass_filter ("Filter") add_envelope ("VolumeEnv") add_envelope ("ModEnv") add_lfo ("ModLFO") add_lfo ("VibratoLFO") add_pan ("Panning") etc.. I am still of the opinion that the instrument patch objects should be decoupled from the synthesis engine, due to its real time nature. So a handler for a particular instrument format would then setup the synthesis model (perhaps the network would be compiled or optimized in some way). A note on handler would then be written which might look like: voice = new_synthesis_voice ("SoundFont") voice.Wavetable.set_sample_callback (my_sample_callback) voice.Wavetable.set_pitch (midi_note) voice.Filter.set_Q (soundfont Q) .. I'm not sure how efficient this would be in practice but it does fit the modular goal. The nice thing about this is that other projects could take direct advantage of Linuxsampler (such as Swami). If anyone cares to check out whats currently happening with the development version of Swami I put up a screenshot. http://swami.sourceforge.net/images/swami_devel_screenshot.jpg You'll notice a few gigasampler files open as well as one DLS file and a SoundFont. 
Only sample info and splits are viewable with DLS2 files at the moment, and there is still quite a lot of work to do before I would consider it a working instrument editor, but things are progressing quite nicely now :) Of note is that the piano, splits and sample view are now implemented with the GnomeCanvas (of additional note is that GnomeCanvas 2.x is not dependent on Gnome). This means the piano and splits can be scaled, and controls can be overlaid onto the same canvas. If you would like to try out the development version, let me know. This stuff is so new it hasn't been checked into CVS yet.

Cheers.
Josh Green |
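The per-voice module network Josh sketches in pseudo code above could be expressed roughly like this in C++. This is a minimal illustration only: every class, function, and parameter name here is hypothetical and does not come from any real Linuxsampler, Swami, or FluidSynth API.

```cpp
#include <map>
#include <string>

// Hypothetical module: a named bag of parameters ("pitch", "Q", ...).
struct Module {
    std::map<std::string, float> params;
    void  set(const std::string& name, float v) { params[name] = v; }
    float get(const std::string& name) const { return params.at(name); }
};

// Hypothetical synthesis model: a named collection of modules, which a
// format handler would assemble once per instrument format.
class SynthesisModel {
public:
    Module& add(const std::string& name) { return modules_[name]; }
    Module& operator[](const std::string& name) { return modules_.at(name); }
private:
    std::map<std::string, Module> modules_;
};

// A SoundFont handler would register its network of synthesis elements:
SynthesisModel make_soundfont_model() {
    SynthesisModel m;
    m.add("Wavetable");   // streams and interpolates audio data
    m.add("Filter");      // lowpass filter
    m.add("VolumeEnv");
    m.add("ModEnv");
    m.add("ModLFO");
    m.add("VibratoLFO");
    m.add("Panning");
    return m;
}
```

A note-on handler would then set per-voice parameters on the pre-built model, e.g. `model["Wavetable"].set("pitch", 60.0f);`, mirroring the `voice.Wavetable.set_pitch (midi_note)` line in the pseudo code.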
From: Benno S. <be...@ga...> - 2003-07-20 16:29:15
|
Hello, sorry for being so quiet during the last weeks, work overload you know .. Anyway, during the last days a few of us (me, Marek, Juan L. and others) discussed the possible roadmaps for linuxsampler. While we haven't produced useful code lately, some of us came up with nice ideas. Some say linuxsampler will never happen, or that some of us "voice" too much, but hey, I think passionate discussion sparks interest and creativity. Plus, good things take time to develop, and unfortunately we are not in the fortunate situation of Paul D., who can work full time on his favorite OSS project.

That said, personally I'm still convinced that it should become a "modular sampler", because I do not like hardcoded engines that much since they limit the creativity of the user. Obviously it should allow importing multiple sample formats, but in this regard there is an opinion split: some (like me) say we should write separate sample loading modules for each format (GIG, AKAI etc.) that store the samples in data structures using the native representation, while others say it would be better to create some powerful (and if possible extensible) internal format that can accommodate the most common formats, in order to simplify things. I'm not sure about that, and if we really write a modular sampler (where you can assemble the various engines using a GUI editor along the lines of NI Reaktor), I'm not convinced that with "native sample format loaders" the code duplication would really be as high as some (Marek) state. Of course it would be cool to use the GIG engine to play AKAI samples and vice versa. But when using native loaders you could always write a GIG-to-AKAI converter module that reads the GIG metadata the GIG loader put into memory and converts it to the AKAI data structures.
Of course the "universal internal format" has the advantage that you can convert from any input format to any output format by writing, for each specific format, a converter module from/to the universal format. That way, if you have N formats, you only need to write N converter modules instead of the N*(N-1) you would need with only native formats. I back the native format loader solution, since you can tune the engine to work exactly hand in hand with that format instead of going through the intermediate universal format. This means you can tune the GIG engine to work exactly with GIG-specific enveloping data, filters, looping etc. The same applies to all other formats. Of course one might want to use the GIG engine to play AKAI samples, but we can write a converter module for that (probably quite a simple one, since AKAI's sample engine is simpler than GIG's). But I think the N*(N-1) converter module equation does not apply here, since it is unlikely that anyone would want to play GIGs using the limited AKAI engine. Since we include both engines, it makes sense to use the native one. So the number of format converter modules is more likely determined by (N of simple formats) * (N of complex formats). Since the number of complex formats (like GIG) is not that high, I guess the number of converter modules would still stay quite low, but with the benefit of having native loaders and native engines that go hand in hand. Marek would object here, but I think using an internal universal format makes it hard to avoid conversion losses, which in turn makes faithful playback hard to achieve. Of course many sample formats share many properties like keymaps, envelopes, filters, looping information etc., but unfortunately each format often uses its own representation, for which a bijective A<->B converter function does not exist. For example, when speaking about envelopes: some formats use simple ADSR models while others allow arbitrary linear or exponential envelope segments.
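The converter-count arithmetic above can be made concrete with two one-line helpers (illustrative only): with only native formats every ordered pair A->B needs its own converter, while a universal intermediate format needs just one to/from-universal converter module per format.

```cpp
// Converters needed between N formats, native-to-native: one per ordered
// pair, i.e. N*(N-1).
int pairwise_converters(int n)  { return n * (n - 1); }

// Converters needed through a universal intermediate format: one to/from
// module per format, i.e. N.
int universal_converters(int n) { return n; }
```

For four formats (e.g. GIG, AKAI, SF2, DLS) that is 12 pairwise converters versus 4 universal ones, which is the trade-off the paragraph above weighs against conversion losses.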
How to avoid the code duplication in the loaders and engines? Use the modular approach! Since the final engines will be compiled using the following pipeline:

module editor -> C source code -> compiled .so file that gets loaded by the sampler

the speed will be OK even if we assemble the engines using a Reaktor-like GUI (and it will be very user-friendly and handy for adding new custom engines). So basically, once we have implemented enough basic building blocks (filters, envelope generators, etc.), designing new engines becomes a breeze. I'm not sure about the loaders (whether we could use a GUI editor to compose loaders too). Most sampling formats are based on the RIFF format (GIG, DLS, SF2, WAV etc.), so a RIFF parser is needed anyway, but the individual modules that interpret the data contained in the various RIFF chunks still need to be written by hand. To achieve maximum flexibility, instead of writing a monolithic format importer we could write, let's say, a general RIFF parser and then specialized modules that interpret the various RIFF chunks. That way, if we happen to support formats that share some characteristics (e.g. GIG is nothing more than DLS with some proprietary additions), we can fully reuse the modules. I think using the approaches described above, the "code duplication" can be kept really low while the flexibility increases quite a bit.

Regarding the GUIs: as you know, I support the idea of totally decoupling the engine from the GUI. It improves code quality because you do not mix GUI stuff with engine stuff. Plus, if you use a TCP socket, you can even run the GUI on a remote machine that can run another OS than Linux (e.g. a Mac OS X frontend box networked with a headless linuxsampler box). As for the module composer, from the discussion we had on IRC we seem to agree that the best thing to do would be making a "stupid" GUI which only forwards commands to the engine.
That way, when you wire together the basic building blocks in the GUI editor, apart from displaying them wired together it does nothing more than issue a connect(module1.out,module2.in) command to the engine, which does the appropriate stuff. That way you can even use scripts, command lines or textual interfaces to manage and construct audio rendering engines.

Regarding the sample importing modules: I think it would be wise to use a bottom-up approach, thus IMHO it would be better not to use and extend complex monolithic applications that can read/edit samples. Josh is writing libInstPatch and I think it is a nice lib, but I'm unsure if it is the right way to go. I think the only way we can solve the problems that linuxsampler poses is a divide et impera approach. So write many "micro modules" and "wire" them together using a GUI. That way you can, as Steve Harris said some time ago, make a good "sampler construction kit".

Regarding the code and what to implement first: I admit we haven't done much lately, but meanwhile the ideas kept flowing and I think that helps to avoid design errors. Thanks to Sebastien and Christian we now have libakai running on Linux (libakai is a small lib that reads and parses AKAI sample images). I think this is the right kind of lib for linuxsampler, since it is lean, very low level, does not contain a GUI and can easily be embedded into linuxsampler importing modules. So basically we need to figure out which kind of modules we should start implementing. It is not easy to choose where to start, since design errors in one module can later affect the design of other modules. So I think it would probably be best to start with the GUI: a module editor similar to Reaktor where you can assemble basic building blocks. The real DSP network building process will occur within the sampler engine, so the GUI will remain quite simple and stupid. I think once the GUI runs we can start to implement the necessary backend.
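The textual connect(module1.out,module2.in) command mentioned above implies a small parser on the engine side. A minimal sketch follows; the command syntax and all names here are hypothetical, not an actual linuxsampler protocol.

```cpp
#include <string>

// Result of parsing a hypothetical "connect(src.port,dst.port)" command.
struct ConnectCmd {
    std::string src, dst;  // "module.port" endpoints
    bool ok = false;       // true if the line was well-formed
};

// Parse one command line of the form: connect(module1.out,module2.in)
ConnectCmd parse_connect(const std::string& line) {
    ConnectCmd cmd;
    const std::string head = "connect(";
    if (line.compare(0, head.size(), head) != 0) return cmd;
    size_t comma = line.find(',');
    size_t close = line.find(')');
    if (comma == std::string::npos || close == std::string::npos || close < comma)
        return cmd;
    cmd.src = line.substr(head.size(), comma - head.size());
    cmd.dst = line.substr(comma + 1, close - comma - 1);
    cmd.ok  = !cmd.src.empty() && !cmd.dst.empty();
    return cmd;
}
```

With a parser like this, the same wiring commands could come from the GUI editor over a TCP socket, from a script, or typed at a command line, which is exactly the point of keeping the GUI "stupid".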
By the backend I mean the logic that manages DSP module networks and, at a later stage, the code that turns the DSP network into a C file that can be compiled into a .so file that runs the actual DSP stuff. I'm using Qt and will post a few screenshots for commenting in a few weeks. At least we will have some code and stuff to play with. The Qt Designer has the nice capability of allowing the integration of custom-made widgets, which can be used to build more complex ones. That way you can for example make audio in, audio out, MIDI in, MIDI out, CV in, CV out widgets which you can use to make new basic building blocks. Some of the developers (Marek? ;-) ) do not like Qt that much (because it is C++ based, or because Qt is GPL and not LGPL like GTK), but that is not a problem. GUIs can be implemented using any toolkit (even curses etc.) since the engine is totally decoupled from the GUI. Personally I like Qt very much, because I think GUIs can be implemented more easily in an object-oriented language than in C, plus Qt classes are IMHO much cleaner (and easier to use) than GTK/gtkmm ones. It is just personal taste, so if at a later stage someone wants to implement a GTK-based GUI, feel free to do so. The engine just does not care what kind of GUI drives it.

Returning to the sample importer modules: as said before, it would be nice to support the AKAI (via libakai) and GIG formats. GIG is a bit more tricky, since we need to decode the GIG-specific chunks. Ruben Van Royen did some good work a while ago (Paul K. and Marek have Ruben's header files), but it is still not complete. So one task that is open to those of you who have GigaSampler is to continue that work (contact Paul K. or Marek for more info). It would be cool if you could produce a small and lean libgig that does something like what libakai does: parse and read GIG files, decode the various chunks (envelope data, filters, looping data, MIDI keymaps) etc.
While it is not a really hard task, it is not very easy either. You need a running GigaSampler and some sample libraries: decode data, change parameters, see what changes in the data chunks of the GIG file, etc. Not to mention that you need a copy of the DLS2 specs (since GIG is based on DLS2), which are not publicly available on the net (but some on the list, like Paul K., seem to have a copy, so just ask him how to obtain one). While I would technically be able to do such stuff, I'd rather prefer to focus on building an efficient low-level engine, so it would be handy if some of you could implement the GIG importing lib. Any volunteers? Such a lib would not only benefit linuxsampler but other OSS projects too (sample editors, softsamplers etc.).

Ok, shutting up now ;-) I'd like you guys to comment on the various issues (whether my reasoning is flawed, what you would do differently, etc).

cheers,
Benno
http://linuxsampler.sourceforge.net

------------------------------------------------- This mail sent through http://www.gardena.net |
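The general RIFF parser Benno proposes (one generic chunk walker, with per-format modules interpreting individual chunks) can be sketched in a few lines, since a RIFF chunk is just a four-character code, a little-endian 32-bit size, and a payload. This is a simplified illustration, not real libgig or libakai code; it walks the chunk list inside an already-loaded buffer and skips RIFF's padding byte after odd-sized chunks.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// One RIFF chunk located inside a buffer.
struct Chunk {
    std::string id;     // four-character code, e.g. "fmt " or "data"
    uint32_t    size;   // payload size in bytes (stored little-endian)
    size_t      offset; // payload start within the buffer
};

// Walk the sequence of chunks in a RIFF body (i.e. after the outer
// 12-byte "RIFF"/form-type header has been consumed).
std::vector<Chunk> walk_chunks(const uint8_t* data, size_t len) {
    std::vector<Chunk> chunks;
    size_t pos = 0;
    while (pos + 8 <= len) {                       // need an 8-byte header
        Chunk c;
        c.id.assign(reinterpret_cast<const char*>(data + pos), 4);
        c.size = uint32_t(data[pos + 4])           // little-endian size
               | (uint32_t(data[pos + 5]) << 8)
               | (uint32_t(data[pos + 6]) << 16)
               | (uint32_t(data[pos + 7]) << 24);
        c.offset = pos + 8;
        chunks.push_back(c);
        pos = c.offset + c.size + (c.size & 1);    // chunks are word-aligned
    }
    return chunks;
}
```

Specialized modules would then dispatch on `Chunk::id`, which is how GIG support could reuse the DLS chunk handlers and add handlers only for the proprietary chunks.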
From: Josh G. <jg...@us...> - 2003-07-01 15:23:59
|
On Mon, 2003-06-30 at 16:33, Christian Schoenebeck wrote:
> > Sure, but not portable, and some strange formats might require low level
> > hardware control. Once again I'm not opposed to this.
>
> I don't see your problem with the low level hw access, Josh.

I wrote "Once again I'm not opposed to this", which I thought would have conveyed that I don't have a problem with it.

> Maybe that's the best for the moment to leave libinstpatch as is and just
> write a C++ frontend.
>
> But I will never understand why people still stick with C...

Understandable. I had thought of C++ at the time when I was trying to decide what to re-write Swami with. As to how good my decision to use GObject was, that remains to be seen. Essentially I wanted to try to program something that could be used by everyone, and so C seemed the most logical choice. I like the relation that GTK+/GObject/glib has, although I didn't really compare that with C++ toolkits. I don't think GTKmm for GTK2 was anywhere near where it is now, so that's why I chose GObject. If I had more knowledge of C++ and what can and can't be done, perhaps I would have chosen otherwise, but for now I think starting to re-write everything in C++ would be a mistake. Perhaps I may undertake this in the future, but things are already object oriented, so I just have to live with some of the excess C object fluff, and hope that if I do recode it in C++ it won't be too much of an architecture change.

> > > JACK :)
> >
> > Of course. But I also like the abstraction that FluidSynth has; although
> > SoundFont 2 centric, you can load arbitrary patch formats as long as
> > they somewhat conform to the SF2 model.
>
> And that's the problem.

From the view of linuxsampler, yes; for FluidSynth it is not a problem but a design decision. Many patch formats do fit well within the SoundFont model, at least DLS2 and other simple formats like GUS.
I'm not completely familiar with GigaSampler yet, but from what I've seen, probably a good portion of its synthesis model could fit into SoundFont as well.

> > I realize that there is no reason for me to push libInstPatch/Swami
> > stuff here if no one wants it, I'd rather be doing some actual work.
>
> libinstpatch with a C++ frontend sounds good and should IMO be used in
> LinuxSampler, but I think it would be better to write a GUI from scratch.
> I would propose an observer model encapsulating the logic of the gui (e.g.
> network connection to LinuxSampler host, sample editing) and then using
> simple plugins for the actual visual part, so you can choose between gtk,
> qt,...

Swami is becoming pluggable in regards to the GUI. I don't want to go into it too much yet, since it's not at a stage where one can actually use it. Getting very, very close though. "Never underestimate how long things can take" is what I try to remind myself of.

> No one talked about extending libakai. That's the least of the problems,
> because those couple of lines can easily be incorporated anywhere. I just
> thought a general API for the various formats would be nice, but without
> your support, Josh, it doesn't make sense, so just using your C++ bindings
> for libinstpatch might be best.

I've been thinking about this a bit more. For right now I'm going to keep concentrating on Swami/libInstPatch development; that is the best use of my time, I think. Once the new development code is usable as a program, I'd feel more comfortable returning to the subject of whether linuxsampler should make use of it or not. I definitely think Swami has many goals similar to linuxsampler's, mainly in creating a generic instrument patch editing and composition environment. I have no intention of doing synthesis though, and thought that was going to be linuxsampler's main focus. That is why I saw it as potentially two projects that could work happily together.
It may end up being that my choice of C was a bad one in regards to interfacing with other people and projects, but I haven't come to that conclusion yet. BTW, I did look into Gtkmm to try to get an idea of what a C++ binding would entail. The process of creating a C++ binding for GObjects looks similar to the procedure for Python bindings; it even uses some of the same tools. So perhaps it may be quite easy to create a C++ binding.

> Best regards,
> Christian

Cheers.
Josh |
From: Christian S. <chr...@ep...> - 2003-06-30 14:37:33
|
On Monday, 30 June 2003 15:05, Josh Green wrote:
> On Mon, 2003-06-30 at 16:37, Marek Peteraj wrote:
> > Hi Josh, Hi everybody!
> > AFAIK it makes no difference under Linux, you can open a /dev/hd- or a
> > /dev/scdx just like a file.
>
> Sure, but not portable, and some strange formats might require low level
> hardware control. Once again I'm not opposed to this.

I don't see your problem with the low level hw access, Josh. That's just a matter of encapsulation. Define a stream class, put the low level code in there and let the rest of the lib just use the methods of this stream class. At the moment the whole thing will be used under Linux only anyway, and if some day somebody wants to use it under OS-whatever, just add the low level code to your stream class. You could even decide one day to completely drop the low level approach for POSIX systems and do a simple f = fopen("/dev/cdrom", "rb"); That way, no matter what you do, it won't affect the code of your lib at all, except the few lines in your stream class.

> > But instead of rewriting it as C++, wouldn't it be better to write a new
> > one in collaboration with others here (Christian, Paul, Marc, me ...) and
> > design it with respect to other popular formats? The main benefit would
> > be the experience other people have with them.
>
> No no.. Not re-writing it, having a binding for it. This is what a lot
> of the GObject based C libraries have that are related to GTK+ and such.
> That's the plus I think with C, it's the least common denominator.

Maybe it's best for the moment to leave libinstpatch as is and just write a C++ frontend. But I will never understand why people still stick with C...

> > JACK :)
>
> Of course. But I also like the abstraction that FluidSynth has; although
> SoundFont 2 centric, you can load arbitrary patch formats as long as
> they somewhat conform to the SF2 model.

And that's the problem.
> I realize that there is no reason for me to push libInstPatch/Swami
> stuff here if no one wants it, I'd rather be doing some actual work.

libinstpatch with a C++ frontend sounds good and should IMO be used in LinuxSampler, but I think it would be better to write a GUI from scratch. I would propose an observer model encapsulating the logic of the GUI (e.g. network connection to the LinuxSampler host, sample editing) and then using simple plugins for the actual visual part, so you can choose between GTK, Qt, ...

> What linuxsampler I think needs most right now is cooperation and
> activity. Go extend libakai if you want, I know that my time would be
> more efficiently used by working on libInstPatch. Once you have
> something, we can all decide what makes the most sense for inclusion in
> linuxsampler.

No one talked about extending libakai. That's the least of the problems, because those couple of lines can easily be incorporated anywhere. I just thought a general API for the various formats would be nice, but without your support, Josh, it doesn't make sense, so just using your C++ bindings for libinstpatch might be best.

Best regards,
Christian |
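The stream-class encapsulation Christian describes above could be sketched as follows: hide whether the bytes come from a plain file, a raw device node like /dev/cdrom, or something else entirely behind one small interface, so the parser code never changes. The class and method names are made up for illustration.

```cpp
#include <cstddef>
#include <cstdio>
#include <string>

// Abstract stream: the format-parsing code only ever sees this interface.
class Stream {
public:
    virtual ~Stream() {}
    virtual size_t read(void* buf, size_t bytes) = 0;
    virtual bool   seek(long offset) = 0;  // absolute seek from the start
};

// POSIX-style implementation: on Linux a raw device node can be opened
// just like a regular file, as Marek points out in the quoted mail.
class FileStream : public Stream {
public:
    explicit FileStream(const std::string& path)
        : f_(std::fopen(path.c_str(), "rb")) {}
    ~FileStream() { if (f_) std::fclose(f_); }
    bool opened() const { return f_ != nullptr; }
    size_t read(void* buf, size_t bytes) override {
        return f_ ? std::fread(buf, 1, bytes, f_) : 0;
    }
    bool seek(long offset) override {
        return f_ && std::fseek(f_, offset, SEEK_SET) == 0;
    }
private:
    std::FILE* f_;
};
```

If some platform later needed genuinely low-level hardware access, only a new `Stream` subclass would be added; the library code using the interface would stay untouched, which is exactly the point Christian is making.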
From: Josh G. <jg...@us...> - 2003-06-30 14:29:02
|
On Mon, 2003-06-30 at 18:09, Marek Peteraj wrote:
> > What linuxsampler I think needs most right now is cooperation and
> > activity.
>
> That was exactly my point (a new lib).

I don't think activity necessarily means re-writing instrument libraries for which code already exists. Maybe collecting some synthesis code together would be more productive? Brainstorming about the various synthesis elements in different instrument patch formats and how to program these and link them together when it comes to actually synthesizing them. From my point of view, the instrument loading/saving library (whatever it happens to become) should keep its objects separate from what the synthesizer is actively working on. This is because in an editing environment the patch objects will be accessed from GUI elements and other things that are inherently slow and not as real-time critical as the synthesis output. So, other thoughts about what process occurs to get the relevant parameters from the patch objects to the synthesis elements, etc. Anyway, just blabbing, and I did kind of want that previous email thread to end, so...

Josh |
From: Marek P. <ma...@na...> - 2003-06-30 13:48:48
|
> I realize that there is no reason for me to push libInstPatch/Swami
> stuff here if no one wants it, I'd rather be doing some actual work. The
> reality of it is though, this is work that already exists, and I am
> someone who has a lot of time on his hands and am willing to help.
>
> You seem to be voicing stuff as if it's the voice of linuxsampler,

I'm not. I'm just throwing in my ideas.

> but I would like to hear opinions from others of this project.

Me too.

> What linuxsampler I think needs most right now is cooperation and
> activity.

That was exactly my point (a new lib).

Marek |