From: Phil K. <phi...@el...> - 2002-11-08 12:20:41
The MIDI spec defines the bit pattern as one start bit, eight data bits and one stop bit, with a transfer rate of 31.25 (+/- 1%) kbaud, hence 10 bits per message byte. Although this speed is the official spec, I've found that ALSA, and a number of MIDI cards, do not seem to throttle data back to the spec and will pass data through the interface at reception speed. Although this is a wider problem for me regarding DMIDI and hardware, it's potentially a good thing when we are dealing with software based D/MIDI applications (10 Gbit MIDI :).

So regarding MIDI timing, I get the impression that PC host based interfaces cannot guarantee the 31.25k rate, but testing is not conclusive.

-P

On Fri, 2002-11-08 at 11:29, chr...@ep... wrote:
> > Further to this, serial MIDI only runs at 31.25 kbaud, which means that
> > the maximum resolution of a MIDI note on is 42 samples at 44.1kHz
> > (30 bits at 31250 bits/s).
> >
> > Obviously things like OSC have greater time resolution, but it shows that
> > sample accurate note triggering isn't essential. It may be something worth
> > dropping for efficiency.
>
> I definitely agree with that, as notes will be triggered by pure MIDI
> devices in most cases anyway. And even if not, sample accurate triggering
> is just overkill.
>
> I think doubling MIDI's frequency (in case of note on that would be
> 2*1067Hz = 2133Hz) would be enough for the internal rate. That way you
> preserve at least MIDI's resolution, no?
>
> (BTW note on has 3 bytes, what are the remaining 6 bits for? CRC?)
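The figures traded in this exchange are easy to verify. A small standalone C program (assuming 44.1 kHz output, as elsewhere in the thread) reproduces both the byte rate and the 42-sample note-on resolution quoted above:

    /* MIDI wire-rate arithmetic from the spec figures in this thread:
     * 31250 baud, 10 bits per byte (1 start + 8 data + 1 stop). */
    #include <stdio.h>

    int main(void)
    {
        const double baud          = 31250.0; /* bits per second       */
        const double bits_per_byte = 10.0;    /* start + 8 data + stop */
        const double note_on_bytes = 3.0;     /* status, key, velocity */
        const double sample_rate   = 44100.0;

        double bytes_per_sec   = baud / bits_per_byte;              /* 3125    */
        double note_on_sec     = note_on_bytes * bits_per_byte / baud; /* 0.96 ms */
        double note_on_samples = note_on_sec * sample_rate;         /* ~42     */

        printf("byte rate      : %.0f bytes/s\n", bytes_per_sec);
        printf("note-on length : %.2f ms\n", note_on_sec * 1e3);
        printf("note-on length : %.1f samples at 44.1 kHz\n", note_on_samples);
        return 0;
    }

Three bytes at 10 wire bits each is 30 bits, i.e. 0.96 ms, which is where the "42 samples at 44.1kHz" figure comes from. The 6 extra bits asked about above are simply the framing (start/stop) bits, not a CRC.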
From: <chr...@ep...> - 2002-11-08 11:29:56
> Further to this, serial MIDI only runs at 31.25 kbaud, which means that
> the maximum resolution of a MIDI note on is 42 samples at 44.1kHz
> (30 bits at 31250 bits/s).
>
> Obviously things like OSC have greater time resolution, but it shows that
> sample accurate note triggering isn't essential. It may be something worth
> dropping for efficiency.

I definitely agree with that, as notes will be triggered by pure MIDI devices in most cases anyway. And even if not, sample accurate triggering is just overkill.

I think doubling MIDI's frequency (in case of note on that would be 2*1067Hz = 2133Hz) would be enough for the internal rate. That way you preserve at least MIDI's resolution, no?

(BTW note on has 3 bytes, what are the remaining 6 bits for? CRC?)
From: Benno S. <be...@ga...> - 2002-11-08 00:32:04
On Fri, 2002-11-08 at 01:14, Josh Green wrote:

Hi Josh,

I do agree on all your points. Indeed Fluid could provide a solid base for SoundFont playback, and it would be cool if we could collaborate with the Fluid developers to produce something that becomes really good and powerful.

Regarding an instrument editor, we strongly need such a beast, and you are probably the most expert in that field. I'm grateful that you collaborate on the project, since without a good editor it will be hard to create new patches without resorting to Windows software. I hope that Swami becomes really flexible and that in the near future it will let you build and edit DLS2, GIG and other instrument types.

Of course Swami needs a powerful playback engine in order to provide a what-you-hear-is-what-you-get feel. This means that ideally the instrument editor and playback engine should go hand in hand, so that the editor can match the engine's capabilities and vice versa.

cheers,
Benno

> My main interest, as far as contribution to LinuxSampler, is concerning
> patch manipulation and a GUI editor front end. I think the goals of Swami
> fit in with this, and welcome any comments agreeing or disagreeing with
> that. I'm not a direct developer of FluidSynth (beyond tracking down
> bugs and suggesting things), so I can't really answer specifics about it.
>
> I was not necessarily suggesting that LinuxSampler should be based off
> of FluidSynth, only that there is probably a bit of information that
> could be gained from existing projects such as that one.
>
> As to performance, I'm not really familiar with Timidity's SoundFont
> capabilities. It seems a bit crude to me to just compare them side by
> side without knowing what features are enabled. For instance Timidity
> might not have Reverb/Chorus enabled by default. There have also been
> some recent gains in performance in CVS. That being said, I really
> should check out Timidity's capabilities sometime (I have never actually
> had it working, although admittedly I have not tried for a while).
>
> While performance with FluidSynth leaves a lot to be desired, it does
> have a fairly complete implementation of the SoundFont specification
> (still missing some things though).
>
> Cheers.
> Josh Green

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
From: Josh G. <jg...@us...> - 2002-11-08 00:13:00
On Thu, 2002-11-07 at 08:14, Benno Senoner wrote:
> Steve, Josh:
> Regarding the IIWU synth, I tried it today in conjunction with the
> large Fluid sound font:
> http://inanna.ecs.soton.ac.uk/~swh/fluid-unpacked/
>
> IIWU/Fluid sounds quite nice but it seems to be quite CPU heavy.
> I took a look at the voice generation routines and it is all pretty much
> "hard coded", plus it uses integer math (integer, fractional parts etc)
> which is not as efficient as you might think.
> I tested it on my dual Celeron 366, and when playing MIDI files it often
> cannot keep up because the CPU load goes to 100%.
> The same MIDI file played in Timidity on the same box works ok without
> drop outs.
> I do not want to criticize IIWU here, I think the authors have done
> quite nice work, but I don't see it as suitable as a base for our
> "sampler construction kit", or, like Steve H. said, "maybe the question
> should be whether it's easier to add disk streaming to FluidSynth".
> I know some of you want quick results or say "if we set the target too
> high we will not reach it and developers might lose interest etc",
> but I think the open source world still lacks a very well thought out,
> flexible and efficient sampling engine, and this takes some time.
> Sure, we can learn a lot from Fluid, perhaps turning it into a SoundFont
> playback module for LinuxSampler, but I do not envision LinuxSampler as
> an extension of Fluid.

My main interest, as far as contribution to LinuxSampler, is concerning patch manipulation and a GUI editor front end. I think the goals of Swami fit in with this, and welcome any comments agreeing or disagreeing with that. I'm not a direct developer of FluidSynth (beyond tracking down bugs and suggesting things), so I can't really answer specifics about it.

I was not necessarily suggesting that LinuxSampler should be based off of FluidSynth, only that there is probably a bit of information that could be gained from existing projects such as that one.

As to performance, I'm not really familiar with Timidity's SoundFont capabilities. It seems a bit crude to me to just compare them side by side without knowing what features are enabled. For instance Timidity might not have Reverb/Chorus enabled by default. There have also been some recent gains in performance in CVS. That being said, I really should check out Timidity's capabilities sometime (I have never actually had it working, although admittedly I have not tried for a while).

While performance with FluidSynth leaves a lot to be desired, it does have a fairly complete implementation of the SoundFont specification (still missing some things though).

Cheers.
Josh Green
From: Benno S. <be...@ga...> - 2002-11-07 23:52:35
Hi Frank,

thanks for the AKAI FTP link. The other day I was discussing with Steve H. how to dump AKAI CDs to disk. One way could be a raw dump of the CD image, the other dumping the samples and the program information. Perhaps we should support both in order to make everyone happy (quite easy to implement).

Regarding the large AKAI setup memory requirements: as said, the opinions on this list are mixed. Some say to keep it all in RAM, others say stream from disk in order to allow working with a larger number of samples. Since the RAM sample module will differ only very little from the disk based sample module, we could for example provide both, where the disk based version provides fewer features (eg no loop point modulation, reverse loops etc).

Benno

On Thu, 2002-11-07 at 01:03, Frank Neumann wrote:
> First of all, hi :-). I joined the list last week but only lurked so far
> (and I am not sure if I'll be able to contribute a lot to this project,
> but... oh well).

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
From: Steve H. <S.W...@ec...> - 2002-11-07 23:09:51
On Thu, Nov 07, 2002 at 07:32:16 +0100, Benno Senoner wrote:
> The jitter correction just ensures that the delay will be constant,
> exactly one fragment.

OK. I'm not sure that is always what you want, but it is a minor issue, and easy to experiment with.

> Plus when driven from a sequencer, the sampler can provide sample
> accurate audio rendering. (but in that case a time stamped protocol is
> probably needed, time stamped MIDI anyone ?)

Yes, I agree here, but time stamped MIDI sounds nasty :)

> > Yes, though generally the CV signals run at a lower rate than the audio
> > signals (eg. 1/4 rate or 1/16th). Like Krate signals in Music N. Providing
> > pitch data at 1/4 audio rate is more than enough and will save cycles.
>
> Yes this is a good idea ... perhaps allowing variable CV granularity,
> or better to run at a fixed 1/4 audio rate ?

Better to have it variable, or at least in a macro, I think. In Csound it is selectable per orc file IIRC.

> > As long as the compiler issues the branch prediction instruction correctly
> > (to hint that the condition will be false), it will be fine. You can check
> > this by looking at the .s output.
>
> How do you check this ?

On PIII+ there is an instruction that gets issued, I think. It's one of the things they improved in gcc 3.2. The compiler's default should go the right way anyway.

> > If you are referring to phase pointers, then it's not an efficiency issue:
> > if you use floating point numbers then the sample will play out of key,
> > only slightly, but enough that you can tell.
>
> Out of key because using 32bit floats provides only a 24bit mantissa ?
> In my proof of concept code I use 64bit floats for the playback pointers
> and it works flawlessly even with extreme pitches.

OK, doubles should be OK, but it seems a bit wasteful to use doubles for this. Maybe not.

[in process]
> Ok, at least the engine is designed to work that way (so I guess for
> maximum performance some extensions to JACK will be required, but I
> assume that will not be a big problem)

No, it shouldn't be too bad.

> > Obviously things like OSC have greater time resolution, but it shows
> > that sample accurate note triggering isn't essential. It may be something
> > worth dropping for efficiency.
>
> Steve, using your own words, for the efficiency "nazis" we could always
> tell the signal recompiler to #undef the event handling code and compile
> an event-less version. ;)

Yes, absolutely, this is what I meant. The system should know whether it's expecting timestamps or not.

- Steve
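A sketch of the out-of-key effect discussed here: accumulate the same pitch increment in a 32-bit float and a 64-bit double for ten seconds of playback and print the divergence. Standalone demo; the exact drift printed depends on the platform's rounding, but the single-precision pointer visibly wanders while the double stays put:

    /* Why 32-bit float phase accumulators detune: rounding error
     * builds up in the float pointer as the position grows. */
    #include <stdio.h>

    int main(void)
    {
        const double pitch = 1.0594630943592953; /* one semitone up  */
        const long   n     = 10L * 44100L;       /* ten seconds      */

        float  pos_f = 0.0f;
        double pos_d = 0.0;

        for (long i = 0; i < n; i++) {
            pos_f += (float)pitch;  /* single-precision accumulator  */
            pos_d += pitch;         /* double-precision accumulator  */
        }
        printf("double pos : %.6f\n", pos_d);
        printf("float  pos : %.6f\n", (double)pos_f);
        printf("drift      : %.6f samples\n", pos_d - (double)pos_f);
        return 0;
    }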
From: Benno S. <be...@ga...> - 2002-11-07 18:21:29
On Thu, 2002-11-07 at 17:43, Steve Harris wrote:
> > For example without timestamped events, when using not so small buffer
> > sizes, note starts/stops would get quantized to buffer boundaries,
> > introducing unacceptable timing artifacts.
>
> That is true in principle, but bear in mind that in many cases the note on
> events will be coming in over real time MIDI, and therefore will have to
> be processed as soon as possible, ie. at the start of the next process()
> block.

Do not underestimate these artifacts. Perhaps they go unnoticed with 1 msec audio fragments, but when you drive up the buffer size things begin to sound weird. And think about the fact that the event can come just a few samples before the current fragment gets to the audio output; this means that the almost-one-full-fragment delay will occur anyway. The small delay will go unnoticed. The jitter correction just ensures that the delay will be constant, exactly one fragment.

You can calculate the needed delay by looking at either the soundcard's frame pointer or, alternatively, using gettimeofday() / RDTSC and scaling it to the sample rate (44100 nominal, or for even more precision calibrate it to the real sample rate). Read this paper in order to convince yourself that the jitter correction is needed in order to provide excellent timing:

http://www.rme-audio.de/english/techinfo/lola_latec.htm

The price to pay is very low since it involves just a few calculations outside the innermost loop. Plus, when driven from a sequencer, the sampler can provide sample accurate audio rendering (but in that case a time stamped protocol is probably needed, time stamped MIDI anyone ?).

> Yes, though generally the CV signals run at a lower rate than the audio
> signals (eg. 1/4 rate or 1/16th). Like Krate signals in Music N. Providing
> pitch data at 1/4 audio rate is more than enough and will save cycles.

Yes this is a good idea ... perhaps allowing variable CV granularity, or better to run at a fixed 1/4 audio rate ?

> > Or perhaps both models can be used ? (a time stamped event that supplies
>
> I think a mixture of the two is necessary.

I believe that too, but I'd like to hear more opinions.

> > That way we save CPU time and can avoid the statement
> > if(sample_ptr >= loop_end_pos) reset_sample_ptr();
> > within the audio rendering loop.
>
> As long as the compiler issues the branch prediction instruction correctly
> (to hint that the condition will be false), it will be fine. You can check
> this by looking at the .s output.

How do you check this ? I mean, probably there will be a cmp (compare statement) followed by a conditional jump (jge); how can the compiler issue branch prediction instructions on x86 ? I thought it was the task of the CPU to figure that out ?

> > IIWU/Fluid sounds quite nice but it seems to be quite CPU heavy.
> > I took a look at the voice generation routines and it is all pretty much
> > "hard coded" plus it uses integer math (integer, fractional parts etc)
> > which is not as efficient as you might think.
>
> If you are referring to phase pointers, then it's not an efficiency issue:
> if you use floating point numbers then the sample will play out of key,
> only slightly, but enough that you can tell.

Out of key because using 32bit floats provides only a 24bit mantissa ? In my proof of concept code I use 64bit floats for the playback pointers and it works flawlessly even with extreme pitches. (I think that with a 48bit mantissa it would take quite a lot of accumulated error before you notice that the tune changes.) Was this the issue, or am I missing something ?

> > Regarding the JACK issues that Matthias W raised:
> > I'm no JACK expert but I hope that JACK supports running a JACK client
> > directly in its own process space as if it were a plugin.
> > This would save unnecessary context switches since there would be only
> > one SCHED_FIFO process running.
> > (Do the JACK ALSA I/O modules work that way ?)
>
> Yes, but there is currently no mechanism for loading an in-process client
> once the engine has started; however, that is merely because the function
> hasn't been written. Both the ALSA and Solaris i/o clients are in process,
> but they are loaded when the engine starts up.

Ok, at least the engine is designed to work that way (so I guess for maximum performance some extensions to JACK will be required, but I assume that will not be a big problem).

> Further to this, serial MIDI only runs at 31.25 kbaud, which means
> that the maximum resolution of a MIDI note on is 42 samples at 44.1kHz
> (30 bits at 31250 bits/s).

Yes, with 32 sample fragments (my beloved 2.1 msec latency case) you can match MIDI resolution with at-block-boundary rendering. I think people will want to use bigger buffer sizes too, perhaps because they want to increase performance or because the particular hardware can't cope with such small buffers. Plus there is offline audio rendering, or rendering driven from a sequencer, where sample accurate rendering can help avoid flanging effects due to small delays in triggering similar or equal waveforms.

The price to pay for handling sample accurate events is quite low, because when there are no events pending no CPU is wasted within the innermost loop. The usual way to handle time stamped events within an audio block:

    while(num_samples_before_event = event_pending()) {
        process(num_samples_before_event);
        handle_event();
    }
    process(num_samples_after_event);

Of course event_pending() and process() will probably be inlined macros in order to provide maximum performance, but as you can see, the only overhead over an event-less system is the checking of event_pending() at the beginning of the block and after an event has occurred (because more than one event per fragment can occur). When using lock-free FIFOs or linked lists (probably the lock-free FIFO is more efficient since it allows asynchronous insertion by other modules), you simply check for the presence of an element within the structure, which usually involves checking a pointer or doing a subtraction (lock-free FIFO). Not a big deal, especially since it lies outside the innermost loop.

> Obviously things like OSC have greater time resolution, but it shows
> that sample accurate note triggering isn't essential. It may be something
> worth dropping for efficiency.

Steve, using your own words, for the efficiency "nazis" we could always tell the signal recompiler to #undef the event handling code and compile an event-less version. :-)

I think the recompiler has many advantages, like easily being able to provide "simplified" instruments where you do not include LP filters, envelopes etc. Eg in the cases where you need only sample playback without any post processing, leave out all the DSP stuff and you get an instrument that is faster than the "standard" ones while doing exactly what you want.

Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
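Benno's event-splitting loop, expanded into a compilable sketch. The static two-entry queue and the printf bodies are stand-ins for the lock-free FIFO and the real render code:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct { size_t frame; int note; } event_t;

    /* Toy event list for the demo: two note-ons inside one block. */
    static event_t queue[] = { { 17, 60 }, { 40, 64 } };
    static size_t  q_head  = 0, q_len = 2;

    static void process(size_t offset, size_t n)
    {
        printf("render %zu frames at offset %zu\n", n, offset);
    }

    static void handle_event(const event_t *ev)
    {
        printf("note %d on at frame %zu\n", ev->note, ev->frame);
    }

    /* Split the block at every event timestamp, so triggers land
     * sample-accurately instead of at block boundaries. */
    static void render_block(size_t nframes)
    {
        size_t done = 0;
        while (q_head < q_len && queue[q_head].frame < nframes) {
            event_t *ev = &queue[q_head++];
            process(done, ev->frame - done);
            handle_event(ev);
            done = ev->frame;
        }
        process(done, nframes - done); /* remainder of the block */
    }

    int main(void) { render_block(64); return 0; }

As the mail says, when the queue is empty the per-block cost collapses to a single check at the top of the loop.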
From: Steve H. <S.W...@ec...> - 2002-11-07 17:02:09
> Regarding the note On/Off triggers I would propose to use timestamped
> events which allow sample accurate triggering.
> For example without timestamped events, when using not so small buffer
> sizes, note starts/stops would get quantized to buffer boundaries,
> introducing unacceptable timing artifacts.

Further to this, serial MIDI only runs at 31.25 kbaud, which means that the maximum resolution of a MIDI note on is 42 samples at 44.1kHz (30 bits at 31250 bits/s).

Obviously things like OSC have greater time resolution, but it shows that sample accurate note triggering isn't essential. It may be something worth dropping for efficiency.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-07 16:43:22
On Thu, Nov 07, 2002 at 05:14:46 +0100, Benno Senoner wrote:
> I'd like you to comment about the design of a simple RAM sample playback ...
> At a later stage we can introduce the disk based sample playback module
> which will act in a similar way as the RAM version but with some
> limitations (eg loop point modulation will not be possible etc)

Good :)

> Regarding the note On/Off triggers I would propose to use timestamped
> events which allow sample accurate triggering.
> For example without timestamped events, when using not so small buffer
> sizes, note starts/stops would get quantized to buffer boundaries,
> introducing unacceptable timing artifacts.

That is true in principle, but bear in mind that in many cases the note on events will be coming in over real time MIDI, and therefore will have to be processed as soon as possible, ie. at the start of the next process() block. It could be that there will also be non-realtime triggering, but if we know there isn't, only handling note on at the start of process() is significantly more efficient.

If you think about it, handling timestamped events is a pathological case of the branch you describe later. It's worse, because the branch predictor can't know which way to go.

> In the latter case it is perhaps more efficient to provide a continuous
> stream of control values (one for each sample) (Steve, do CV models
> work like that ?)

Yes, though generally the CV signals run at a lower rate than the audio signals (eg. 1/4 rate or 1/16th). Like Krate signals in Music N. Providing pitch data at 1/4 audio rate is more than enough and will save cycles.

> Or perhaps both models can be used ? (a time stamped event that supplies

I think a mixture of the two is necessary.

> That way we save CPU time and can avoid the statement
> if(sample_ptr >= loop_end_pos) reset_sample_ptr();
> within the audio rendering loop.

As long as the compiler issues the branch prediction instruction correctly (to hint that the condition will be false), it will be fine. You can check this by looking at the .s output.

> IIWU/Fluid sounds quite nice but it seems to be quite CPU heavy.
> I took a look at the voice generation routines and it is all pretty much
> "hard coded" plus it uses integer math (integer, fractional parts etc)
> which is not as efficient as you might think.

If you are referring to phase pointers, then it's not an efficiency issue: if you use floating point numbers then the sample will play out of key, only slightly, but enough that you can tell.

> I know some of you want quick results or say "if we set the target too
> high we will not reach it and developers might lose interest etc",
> but I think the open source world still lacks a very well thought out,
> flexible and efficient sampling engine, and this takes some time.

I think the starting point of a simple RAM based playback module is good. It allays my fears about this.

> Regarding the JACK issues that Matthias W raised:
> I'm no JACK expert but I hope that JACK supports running a JACK client
> directly in its own process space as if it were a plugin.
> This would save unnecessary context switches since there would be only
> one SCHED_FIFO process running.
> (Do the JACK ALSA I/O modules work that way ?)

Yes, but there is currently no mechanism for loading an in-process client once the engine has started; however, that is merely because the function hasn't been written. Both the ALSA and Solaris i/o clients are in process, but they are loaded when the engine starts up.

- Steve
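A minimal sketch of the 1/4-rate control idea: the pitch CV buffer carries one value per four output frames, and the render loop indexes it at the lower rate. The function and variable names, and the linear-interpolation body, are invented for the example:

    #include <stddef.h>
    #include <stdio.h>

    #define CTRL_DIV 4  /* control rate = audio rate / 4 */

    /* 'pitch_cv' holds nframes/CTRL_DIV entries, one per group
     * of four output samples (the "k-rate" signal). */
    static void render_block(float *out, size_t nframes,
                             const float *pitch_cv,
                             const float *sample, double *pos)
    {
        for (size_t i = 0; i < nframes; i++) {
            double pitch = pitch_cv[i / CTRL_DIV]; /* k-rate read */
            size_t idx  = (size_t)*pos;
            double frac = *pos - (double)idx;
            /* linear interpolation between neighbouring frames */
            out[i] = (float)((1.0 - frac) * sample[idx]
                             + frac * sample[idx + 1]);
            *pos += pitch;
        }
    }

    int main(void)
    {
        float  sample[64] = { 0 }, out[16];
        float  cv[4] = { 1.00f, 1.01f, 1.02f, 1.03f };
        double pos = 0.0;
        render_block(out, 16, cv, sample, &pos);
        printf("pos after block: %f\n", pos);
        return 0;
    }

The division by CTRL_DIV is the whole trick: the pitch source only has to produce a quarter as many values, and the inner loop stays branch-free.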
From: Benno S. <be...@ga...> - 2002-11-07 16:04:01
Hi,

(please read carefully, long mail :-) )

I'd like you to comment on the design of a simple RAM sample playback module that can later be incorporated into the signal path compiler. (Of course many other modules like FXes, modulators etc will be required for a real sample playback engine, but the sampler module is fundamental here, and it is very important that it is well designed, efficient and provides good audio quality.) At a later stage we can introduce the disk based sample playback module, which will act in a similar way as the RAM version but with some limitations (eg loop point modulation will not be possible etc).

Now to my RAM sample module proposal; first see this simple diagram:

http://linuxsampler.sourceforge.net/images/ramsamplemodule1.png

Basically it contains no MIDI or keymapping capabilities, because this will be delegated to other modules. The module allows note start/stop triggering and processing of looping information via lists of looping segments, so that you can loop the sample in very flexible ways. Being RAM based, one could even modulate the loop points without any problem.

Regarding the note On/Off triggers I would propose to use timestamped events, which allow sample accurate triggering. For example, without timestamped events, when using not so small buffer sizes, note starts/stops would get quantized to buffer boundaries, introducing unacceptable timing artifacts.

Where I am not so sure about using timestamped events or not is when modulating the pitch. The problem is that we need to handle both real time pitch messages, like those generated by the pitch-bend MIDI controller, and at the same time allow the pitch of the sampler module to be modulated by elements like LFOs, envelopes etc. In the latter case it is perhaps more efficient to provide a continuous stream of control values, one for each sample. (Steve, do CV models work like that ?) Or perhaps both models can be used ? (A time stamped event that supplies an array of control values: eg bufsize=64 samples and we want to modulate the pitch in a continuous way, so we simply generate an event that supplies 64 control values (floats) at the beginning of the processing of each fragment.) This is a very important issue, and I'd like the experts out here to give us the right advice.

Another important issue is how to process looping information efficiently:

    --------|------------|------------|----
            |            |            |
          start    playback ptr      end

Basically we need to figure out when the playback ptr goes past the loop end point and reset it to the loop start position. Assume that the playback ptr gets increased by the pitch value during each iteration of the audio rendering loop (pitch=1.0: sample played at nominal speed; pitch=2.0: sample played one octave higher; etc).

If pitch remains constant for the entire audio fragment, we can figure out in advance at which iteration the playback ptr needs to be reset to the loop start position. That way we save CPU time and can avoid the statement

    if(sample_ptr >= loop_end_pos) reset_sample_ptr();

within the audio rendering loop.

When assuming that the pitch gets changed only by MIDI pitch bend events, the above event based model works well, since the pitch remains constant between two events. The problems arise when we let the pitch be modulated with single sample resolution by other modules like LFOs and envelopes. Generating an event for each sample is too heavy in terms of CPU cycles, and since external modules can modulate the pitch in an almost arbitrary way, it becomes hard to estimate when the sample playback ptr needs to be reset to the loop start position.

I see 3 solutions for this problem (I hope that you guys can come up with something more efficient if it exists):

a) Preserve the if(sample_ptr >= loop_end_pos) ... statement within the audio rendering loop; waste a bit of CPU, but allow arbitrary pitch modulation, regardless of whether it is event based or driven by continuous values.

b) Limit the upward pitch modulation to, let's say, +5 octaves from the root note (max pitch=32). That way you can estimate when you will need to start checking whether the loop end point was reached, assuming cycles_to_go = (loop_end_pos - playback_pos) / pitch. With pitch=32 you waste CPU time in the sense that you need to perform the if() check for up to 32*samples_per_fragment iterations. This is not that much, since when running real time samples_per_fragment can be as low as 32 or 64, thus 64*32 = 2048. You perform the if() check 2048 times out of possibly hundreds of thousands (assuming each sample is around 100k). This means the waste of CPU is only a few % while still allowing arbitrary modulation with some upward limits (the limitation of pitch-up modulation to +5 octaves).

c) Allow only linear ramping between two pitch events, eg at each iteration you do:

    playback_ptr += pitch;
    pitch += delta_pitch;

Complex pitch modulation would be emulated through many linear ramps, sending pitch events. The linear behaviour of the pitch lets you easily calculate the position where you need to reset the playback pointer to the loop_start position.

So what do you think about a), b) and c) ? Personally I prefer a) or b); if a) does not waste that much CPU I'd like to use this method, since it allows flexible pitch modulation, but probably the impact will not be negligible. Does a d) solution exist that is more efficient than the above ones ? Your thoughts and comments please.

Below I'm responding to other issues raised in the last messages, in order to avoid spamming the list too much.

Steve, Josh: Regarding the IIWU synth, I tried it today in conjunction with the large Fluid sound font:

http://inanna.ecs.soton.ac.uk/~swh/fluid-unpacked/

IIWU/Fluid sounds quite nice but it seems to be quite CPU heavy. I took a look at the voice generation routines and it is all pretty much "hard coded", plus it uses integer math (integer, fractional parts etc) which is not as efficient as you might think. I tested it on my dual Celeron 366, and when playing MIDI files it often cannot keep up because the CPU load goes to 100%. The same MIDI file played in Timidity on the same box works ok without dropouts. I do not want to criticize IIWU here; I think the authors have done quite nice work. But I don't see it as suitable as a base for our "sampler construction kit", or, like Steve H. said, "maybe the question should be whether it's easier to add disk streaming to FluidSynth". I know some of you want quick results, or say "if we set the target too high we will not reach it and developers might lose interest etc", but I think the open source world still lacks a very well thought out, flexible and efficient sampling engine, and this takes some time. Sure, we can learn a lot from Fluid, perhaps turning it into a SoundFont playback module for LinuxSampler, but I do not envision LinuxSampler as an extension of Fluid.

Phil K.: Regarding the GUI, socket and DMIDI issues: as some of you said, it is better to separate the GUI and (MIDI) real time control sockets. The GUI can easily use the real time socket to issue MIDI commands etc. I think we should provide an intermediate layer for handling these real time messages, so that one can easily support multiple backends like DMIDI, alsa-seq, raw MIDI, etc.

Alex Klein: Hi, welcome on board. If you have good experience with Windows audio software samplers, in particular GigaStudio, this is ideal, since you can do side-by-side comparisons, suggest improvements, check performance, etc. (Especially since you said you have lots of spare time in the next months :-).) Regarding VST support in LinuxSampler, according to this:

http://eca.cx/lad/2002/11/0109.html

VST for Linux would need some modifications in the headers, and asking Steinberg for permission to redistribute the result. Given that, you could quite easily port to Linux the DSP processing part of VST plugins where the source is available. The GUI is another issue, and would probably require a complete rewrite unless the author has used some cross-platform toolkit like Qt etc. But as you know, the native plugin API for Linux is LADSPA, and we will of course support it in LinuxSampler (mainly for FX processing).

Regarding the JACK issues that Matthias W raised: I'm no JACK expert, but I hope that JACK supports running a JACK client directly in its own process space as if it were a plugin. This would save unnecessary context switches, since there would be only one SCHED_FIFO process running. (Do the JACK ALSA I/O modules work that way ?)

cheers
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
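A sketch of the constant-pitch fast path Benno describes above: compute up front how many frames fit before the loop end (note that this is a division by pitch), render them without the per-sample wrap test, then wrap once. Names are invented, the lookup is nearest-neighbour for brevity, and a valid loop (loop_end > loop_start, pitch > 0) is assumed:

    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    static void render_loop(float *out, size_t nframes,
                            const float *sample, double *pos,
                            double loop_start, double loop_end,
                            double pitch)
    {
        size_t done = 0;
        while (done < nframes) {
            /* frames we can render before the pointer crosses the
             * loop end (cycles_to_go = distance / pitch) */
            double dist = loop_end - *pos;
            size_t run  = dist > 0.0 ? (size_t)ceil(dist / pitch) : 0;
            if (run > nframes - done)
                run = nframes - done;

            for (size_t i = 0; i < run; i++) { /* no wrap test here */
                out[done + i] = sample[(size_t)*pos];
                *pos += pitch;
            }
            done += run;

            while (*pos >= loop_end)           /* wrap once per run */
                *pos -= loop_end - loop_start;
        }
    }

    int main(void)
    {
        float sample[100], out[256];
        for (int i = 0; i < 100; i++) sample[i] = (float)i;
        double pos = 0.0;
        render_loop(out, 256, sample, &pos, 20.0, 90.0, 1.5);
        printf("pos after block: %f\n", pos);
        return 0;
    }

This is essentially option a) with the branch hoisted out of the inner loop; it degrades gracefully into option b) if the precomputation is only refreshed once per fragment under a bounded maximum pitch.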
From: Phil K. <phi...@el...> - 2002-11-07 10:17:09
Nope. This gives us more flexibility, being able to do this.

-P

On Thu, 2002-11-07 at 09:18, Steve Harris wrote:
> On Thu, Nov 07, 2002 at 12:11:51 +0000, Phil Kerr wrote:
> > > > I think most of the other controls of the engine will be MIDI based
> > > > data: notes, pitchbend, cc data.
> > >
> > > Right, but that is more of a realtime control thing, than a UI thing.
> > > The UI could use those (eg. for testing), but doesn't have to. MIDI
> > > can be received either by alsa midi, or dmidi or whatever.
> >
> > It could be a mixture of both. I can alter, for example, filter settings
> > from either the front panel or via sysex.
>
> Right, but nothing stops the UI from opening a DMIDI connection to the
> engine as well as the UI socket.
>
> - Steve
From: Steve H. <S.W...@ec...> - 2002-11-07 09:18:23
On Thu, Nov 07, 2002 at 12:11:51 +0000, Phil Kerr wrote:
> > > I think most of the other controls of the engine will be MIDI based
> > > data: notes, pitchbend, cc data.
> >
> > Right, but that is more of a realtime control thing, than a UI thing.
> > The UI could use those (eg. for testing), but doesn't have to. MIDI
> > can be received either by alsa midi, or dmidi or whatever.
>
> It could be a mixture of both. I can alter, for example, filter settings
> from either the front panel or via sysex.

Right, but nothing stops the UI from opening a DMIDI connection to the engine as well as the UI socket.

- Steve
From: Josh G. <jg...@us...> - 2002-11-07 05:59:46
On Wed, 2002-11-06 at 09:44, Steve Harris wrote:
> On Wed, Nov 06, 2002 at 09:35:33AM -0800, Josh Green wrote:
> > peer jamming on LAN/internet). Of course much of this may fit well with
> > the goals of LinuxSampler. iiwusynth (now called FluidSynth) may also
> > have much to offer, although I'm not sure if the authors know about
> > this project yet. I'll shoot an email over to the iiwusynth devel list.
> > It would be a shame for all of us to re-invent the wheel and then find
> > that the wheels don't even work together :) Cheers.
>
> Interesting, maybe the question should be whether it's easier to add disk
> streaming to FluidSynth. I played with iiwusynth a few months back and it
> seemed very impressive.
>
> - Steve

FluidSynth has been a little quiet for a while (as has my own project), but things seem to be picking up again. One of the developers, Markus Nentwig, is doing lots of optimization work. We were just recently making a list of future plans, one of them being sample streaming support (including disk streaming). I really think that Swami/FluidSynth are quite nice and that a lot of people are missing out by not trying them :) Cheers.

Josh Green
From: Frank N. <bea...@we...> - 2002-11-07 00:03:05
Hi,

Benno wrote:
[...]
> Steve, as suspected, there are people that agree with me that when loading
> AKAI samples in RAM you can easily end up burning 256MB of RAM, which is
> a lot for not-high-end PCs.
> Let's see how the discussion evolves ... AKAI experts, what are you saying ?

First of all, hi :-). I joined the list last week but only lurked so far (and I am not sure if I'll be able to contribute a lot to this project, but... oh well).

Second, I am no AKAI expert, but at least I have an S2000 with 32 MB RAM at home (so I should be able to give some information about how "the real thing" is done), and I also have a couple of sampling CDs for it. My main interest is to be able to use that beast in my MIDI setup at home, and that's what a small hobby project I have pursued for quite a while now is focussed on (no comments here; I need to be able to release something first :).

What I can add to this discussion right now is that of the sampling CDs I have here, most instrument sets are rather small; the largest I have don't even fill the 32MB RAM of the sampler, though there are certainly much larger sampling collections out there. But when I had the opportunity to play a little with a Windows-based music system recently (using a Creamware Pulsar Scope board and Cubase) and checked out the software sampler modules that come with the Scope, I found that it typically expected sample sets no larger than 100 MB (I believe that was the fixed upper bound). All the sampling CDs I could use there were also much smaller (per "instrument") than 32 MB. However, by creating several layers you could easily multiply the memory requirements by 2, 3 or 4.

Frank

PS: I recently found a nice sample resource at AKAI:

ftp://ftp.akaipro.co.jp/pub/downloads

They have a couple of soundsets for the MPC2000XL and S5000/S6000. The few sets I managed to download so far sounded quite good (most even in stereo). There is one especially interesting piece, which is a 190 MB zip archive of a stereo piano... sounds like a good test candidate :-). I was happy to see that Paul Kellett's AKAI file format information page should fully suffice to be able to parse the program file (*.AKP).
From: Phil K. <phi...@el...> - 2002-11-06 22:54:34
Quoting Steve Harris <S.W...@ec...>:
> On Wed, Nov 06, 2002 at 09:49:40 +0000, Phil Kerr wrote:
> > Aaaah, I see :)
> >
> > So we have n number of engines and a GUI broadcasts a message to see
> > what's there. Each engine responds with config data.
>
> I was thinking there would be a broker, and that could do the selection.
> Broadcast is a bit problematic.
>
> Probably the only info needed would be what engine the UI is for, and
> what version of the UI protocol it speaks (internal to the UI<->engine
> connection), eg:
>
> <engine>
>   <type>linux-akai</type>
>   <protocol major="0" minor="1" micro="1" />
> </engine>
>
> Other stuff can be done between the UI and engine privately; other things
> don't need to be able to understand it.

Yes, this looks good.

> > I think most of the other controls of the engine will be MIDI based
> > data: notes, pitchbend, cc data.
>
> Right, but that is more of a realtime control thing, than a UI thing.
> The UI could use those (eg. for testing), but doesn't have to. MIDI can
> be received either by alsa midi, or dmidi or whatever.

It could be a mixture of both. I can alter, for example, filter settings from either the front panel or via sysex.

-P
From: Phil K. <phi...@el...> - 2002-11-06 22:33:38
Yes. Local MIDI control using ALSA shouldn't be a problem, and it may work out best to have two, or more, ports for remote control. I'm not 100% sure whether there are any issues with context switching in the TCP stack, but I don't see that there would be too many situations where you would be hammering the GUI and control ports at the same time.

-P

Quoting "Richard A. Smith" <rs...@bi...>:
> On Wed, 6 Nov 2002 17:46:34 +0000, Steve Harris wrote:
> > > For real-time control DMIDI would be good as it allows hardware control
> > > and the syntax checking is minimal. This is where using XML instead of
> > > CC or SYSEX messages would be too heavy.
> >
> > Argh, yes! I think we were talking at cross purposes. I was just thinking
> > of the service discovery phase; obviously the control protocol wants to
> > be binary and lightweight.
> >
> > > I think it would be good if we could find a split point between what
> > > can be controlled by a GUI and what can be controlled by MIDI hardware,
> > > even though there's overlap to an extent.
>
> What kind of real-time GUI control are we talking about?
>
> Perhaps I'm showing my sampler ignorance here, but it seems to me that
> control via MIDI and control via socket are 2 separate entities in 2
> separate worlds with very different manners of operation. So why try
> to handle them with the same system?
>
> If you wanted to send the engine something like DMIDI data, then
> shouldn't that be on a separate socket? I'm not sure that the
> one-socket-does-it-all approach makes any sense. Most of the GUI stuff
> is patch uploading, layering control, assignment of channels, loop
> points, graph setup, etc: all non-real time stuff.
>
> I suppose if you are controlling it via a software sequencer then
> there would be a good bit of real-time type data, but I would think
> that's much better handled via the MIDI system rather than a general
> purpose GUI control port.
From: Richard A. S. <rs...@bi...> - 2002-11-06 21:01:25
On Wed, 6 Nov 2002 17:46:34 +0000, Steve Harris wrote:
> > For real-time control DMIDI would be good as it allows hardware control
> > and the syntax checking is minimal. This is where using XML instead of
> > CC or SYSEX messages would be too heavy.
>
> Argh, yes! I think we were talking at cross purposes. I was just thinking
> of the service discovery phase; obviously the control protocol wants to
> be binary and lightweight.
>
> > I think it would be good if we could find a split point between what
> > can be controlled by a GUI and what can be controlled by MIDI hardware,
> > even though there's overlap to an extent.

What kind of real-time GUI control are we talking about?

Perhaps I'm showing my sampler ignorance here, but it seems to me that control via MIDI and control via socket are 2 separate entities in 2 separate worlds with very different manners of operation. So why try to handle them with the same system?

If you wanted to send the engine something like DMIDI data, then shouldn't that be on a separate socket? I'm not sure that the one-socket-does-it-all approach makes any sense. Most of the GUI stuff is patch uploading, layering control, assignment of channels, loop points, graph setup, etc: all non-real time stuff.

I suppose if you are controlling it via a software sequencer then there would be a good bit of real-time type data, but I would think that's much better handled via the MIDI system rather than a general purpose GUI control port.

--
Richard A. Smith       Bitworks, Inc.
rs...@bi...            479.846.5777 x104
Sr. Design Engineer    http://www.bitworks.com
From: Steve H. <S.W...@ec...> - 2002-11-06 20:52:51
On Wed, Nov 06, 2002 at 09:49:40 +0000, Phil Kerr wrote:
> Aaaah, I see :)
>
> So we have n number of engines and a GUI broadcasts a message to see
> what's there. Each engine responds with config data.

I was thinking there would be a broker, and that could do the selection. Broadcast is a bit problematic.

Probably the only info needed would be what engine the UI is for, and what version of the UI protocol it speaks (internal to the UI<->engine connection), eg:

    <engine>
      <type>linux-akai</type>
      <protocol major="0" minor="1" micro="1" />
    </engine>

Other stuff can be done between the UI and engine privately; other things don't need to be able to understand it.

> I think most of the other controls of the engine will be MIDI based
> data: notes, pitchbend, cc data.

Right, but that is more of a realtime control thing, than a UI thing. The UI could use those (eg. for testing), but doesn't have to. MIDI can be received either by alsa midi, or dmidi or whatever.

- Steve
From: Phil K. <phi...@el...> - 2002-11-06 20:32:27
Aaaah, I see :)

So we have n number of engines and a GUI broadcasts a message to see what's there. Each engine responds with config data.

So, what data does the GUI need?

- An instance identifier.
- What MIDI channel it is on.
- Patch details (sample info, ADSR, filter, loop points).
- Effect details.
- JACK connection info.
- ... others?

The GUI would be greatly enhanced if it can display waveform data, so there should be a mechanism for passing this from the engine. It doesn't need to be a dump of the sample data, just enough info to do editing.

I think most of the other controls of the engine will be MIDI based data: notes, pitchbend, cc data.

What about uploading samples from the GUI to the engine remotely?

Thoughts?

-P

Quoting Steve Harris <S.W...@ec...>:
> On Wed, Nov 06, 2002 at 05:34:21PM +0000, Phil Kerr wrote:
> > For config data using XML has clear cross-platform advantages, and
> > XML-RPC is a lot lighter than SOAP and could serve our needs well.
>
> I was thinking of raw XML, not XML-RPC.
>
> > For real-time control DMIDI would be good as it allows hardware control
> > and the syntax checking is minimal. This is where using XML instead of
> > CC or SYSEX messages would be too heavy.
>
> Argh, yes! I think we were talking at cross purposes. I was just thinking
> of the service discovery phase; obviously the control protocol wants to
> be binary and lightweight.
>
> > I think it would be good if we could find a split point between what
> > can be controlled by a GUI and what can be controlled by MIDI hardware,
> > even though there's overlap to an extent.
>
> Yes, probably.
>
> - Steve
From: Steve H. <S.W...@ec...> - 2002-11-06 17:46:45
On Wed, Nov 06, 2002 at 05:34:21PM +0000, Phil Kerr wrote:
> For config data using XML has clear cross-platform advantages, and
> XML-RPC is a lot lighter than SOAP and could serve our needs well.

I was thinking of raw XML, not XML-RPC.

> For real-time control DMIDI would be good as it allows hardware control
> and the syntax checking is minimal. This is where using XML instead of
> CC or SYSEX messages would be too heavy.

Argh, yes! I think we were talking at cross purposes. I was just thinking of the service discovery phase; obviously the control protocol wants to be binary and lightweight.

> I think it would be good if we could find a split point between what can
> be controlled by a GUI and what can be controlled by MIDI hardware, even
> though there's overlap to an extent.

Yes, probably.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-06 17:44:18
On Wed, Nov 06, 2002 at 09:35:33AM -0800, Josh Green wrote:
> peer jamming on LAN/internet). Of course much of this may fit well with
> the goals of LinuxSampler. iiwusynth (now called FluidSynth) may also
> have much to offer, although I'm not sure if the authors know about
> this project yet. I'll shoot an email over to the iiwusynth devel list.
> It would be a shame for all of us to re-invent the wheel and then find
> that the wheels don't even work together :) Cheers.

Interesting, maybe the question should be whether it's easier to add disk streaming to FluidSynth. I played with iiwusynth a few months back and it seemed very impressive.

- Steve
From: Phil K. <phi...@el...> - 2002-11-06 17:42:10
Both approaches have advantages. The high-level approach, using XML, gives us a far richer dataset to work with. The low-level one allows for better MIDI hardware support. There's probably a cut-off point where one method is better than the other.

For config data, using XML has clear cross-platform advantages, and XML-RPC is a lot lighter than SOAP and could serve our needs well.

For real-time control, DMIDI would be good as it allows hardware control and the syntax checking is minimal. This is where using XML instead of CC or SYSEX messages would be too heavy.

I think it would be good if we could find a split point between what can be controlled by a GUI and what can be controlled by MIDI hardware, even though there's overlap to an extent.

Cheers

-P

On Wed, 2002-11-06 at 15:58, Steve Harris wrote:
> On Wed, Nov 06, 2002 at 03:34:58PM +0000, Phil Kerr wrote:
> > My ideas were similar but are lower level, more akin to ARP broadcasts.
>
> I think the higher level has advantages.
>
> > Although SOAP and WSDL are good choices in themselves, they are heavy,
> > and having to include XML parsers does add bloat. I'm not sure if going
> > down the CORBA road would be good, but it does work.
>
> Yes, I wasn't suggesting SOAP or WSDL for connection level service
> discovery; they are much too heavy, though that has nothing to do with
> XML. I don't think you can describe the overhead of an XML parser
> (especially libxml) on a sampler engine as bloat. CORBA would be, however.
>
> It would be nice if engines or GUIs could be written on any platform that
> can support TCP/IP and XML, without requiring the use of low level DMIDI
> stuff. XML also removes the need for syntax descriptions and so on; you
> just publish a DTD or schema.
>
> > John has started work on a coding guide to MWPP and we should be able
> > to use his examples. Do we have access to any OSC code?
>
> Sure, the reference implementation is available, and John L. has
> implemented support for it in saol IIRC. The original reference
> implementation was a mess, but I think it's been cleaned up and
> modernised since then.
>
> - Steve
From: Josh G. <jg...@us...> - 2002-11-06 17:33:23
On Wed, 2002-11-06 at 06:18, Steve Harris wrote:
> On Tue, Nov 05, 2002 at 11:35:24PM -0800, Josh Green wrote:
> > - Using GObject, a C based object system (GStreamer uses this). This
> > gives us an object system with C as the lowest common denominator
> > programming language. GObject also has an easy Python binding code
> > generator that is part of the pygtk package.
>
> Cool. Does GObject have C++ bindings for the C++ nazis here ;)

GObject is part of glib 2.0 and is used by GTK+ 2.0. Therefore gtkmm (the C++ bindings for GTK) contains GObject bindings as well. I don't know the details of this though, or whether the GObject stuff can be used by itself without the GTK dependency.

> > - How well would the object system libInstPatch uses fit with a real
> > time synthesizer? What kind of system could be used to associate
> > synthesizer time critical data with patch objects?
>
> It would probably have to be abstracted in some way. The deal with RT-ish
> audio software is you get handed a buffer big enough for 64 samples (for
> example), and you have <<1 ms to fill it then return. This means no
> malloc, no file i/o, no serious data traversal.

I am also pretty familiar with the requirements of real time synthesis; that is why I was posing that question. Since patch parameters and objects are multi-thread locked for various operations, they might not be very friendly to real time situations. That is why I was trying to think of some sort of parameter caching system, where each active preset has its own parameter space that the synth has direct access to and can define its own values in (it would need this for its internal variables anyway). This could then be synchronized with the object system. I perceive it as a trade off between patch parameter flexibility and speed. Creating 2 systems that synchronize with each other could give us the best of both worlds.

> I would imagine that most engines will pre-prepare large linear sample
> buffers of the audio that they're most likely to be required to play,
> then when their RT callback gets called they can just copy from the
> prepared buffers (maybe apply post processing effects if they have RT
> controls) and then return.

Streaming from disk is, I think, the answer. Of course small samples could be cached in RAM (Linux does this for us to some extent anyway). What I'm concerned with currently is the patch information and parameters, and the real time response in querying them (which is what the synth engine would be doing).

> If the post processing only contains effects that are controlled by
> time (static envelopes, retriggered LFOs etc.), not by user interaction
> (MIDI, sliders, dynamic envelopes, whatever) then they could be applied
> ahead of time. But maybe that is too special a case.

Well, I suppose there might be some optimization that could occur when there aren't any real time controls connected to a synthesis parameter. SoundFont uses modulators to connect parameters to MIDI controls.

BTW, if anyone has yet to try my program Swami (http://swami.sourceforge.net) you may want to do so :) It currently uses iiwusynth as its software synthesizer, which has a lot of the features we are talking about, but uses SoundFont files as its basic synthesis format. Modulator support is one of the things already working in iiwusynth (even loop points can now be modulated, though only in CVS of Swami and iiwusynth). The underlying architecture of Swami is all GObject and, in my opinion, very flexible and powerful (plugins, wavetable objects, etc). I have many plans to create instrument patch oriented applications with it (Python binding, online web patch database, multi peer jamming on LAN/internet). Of course much of this may fit well with the goals of LinuxSampler. iiwusynth (now called FluidSynth) may also have much to offer, although I'm not sure if the authors know about this project yet. I'll shoot an email over to the iiwusynth devel list. It would be a shame for all of us to re-invent the wheel and then find that the wheels don't even work together :) Cheers.

Josh Green
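One way to build the parameter cache Josh describes is a seqlock: the editor thread bumps a counter around its writes, and the audio thread retries any read that overlapped a write, so the RT callback never blocks or allocates. A sketch with invented names, written with C11 atomics for brevity (the unsynchronized struct copy inside the retry loop is the well-known caveat of this pattern):

    #include <stdatomic.h>

    /* Per-preset synthesis parameters the RT thread needs. */
    typedef struct { float attack, release, cutoff; } params_t;

    static params_t    shared_params;
    static atomic_uint seq; /* odd while a write is in progress */

    /* Editor/GUI thread: single writer. */
    void params_write(const params_t *p)
    {
        atomic_fetch_add(&seq, 1); /* now odd: readers will retry   */
        shared_params = *p;
        atomic_fetch_add(&seq, 1); /* even again: snapshot is valid */
    }

    /* Audio thread: lock-free and allocation-free, so it is safe
     * to call from the RT callback; retries if a write overlapped. */
    params_t params_read(void)
    {
        params_t snap;
        unsigned before, after;
        do {
            before = atomic_load(&seq);
            snap   = shared_params;
            after  = atomic_load(&seq);
        } while (before != after || (before & 1u));
        return snap;
    }

    int main(void)
    {
        params_t p = { 0.01f, 0.3f, 8000.0f };
        params_write(&p);
        return params_read().cutoff == 8000.0f ? 0 : 1;
    }

The reader copies a consistent snapshot without ever taking the locks that guard the flexible object-system side, which is exactly the flexibility/speed split described above.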
From: Steve H. <S.W...@ec...> - 2002-11-06 15:58:09
On Wed, Nov 06, 2002 at 03:34:58PM +0000, Phil Kerr wrote:
> My ideas were similar but are lower level, more akin to ARP broadcasts.

I think the higher level has advantages.

> Although SOAP and WSDL are good choices in themselves, they are heavy,
> and having to include XML parsers does add bloat. I'm not sure if going
> down the CORBA road would be good, but it does work.

Yes, I wasn't suggesting SOAP or WSDL for connection level service discovery; they are much too heavy, though that has nothing to do with XML. I don't think you can describe the overhead of an XML parser (especially libxml) on a sampler engine as bloat. CORBA would be, however.

It would be nice if engines or GUIs could be written on any platform that can support TCP/IP and XML, without requiring the use of low level DMIDI stuff. XML also removes the need for syntax descriptions and so on; you just publish a DTD or schema.

> John has started work on a coding guide to MWPP and we should be able to
> use his examples. Do we have access to any OSC code?

Sure, the reference implementation is available, and John L. has implemented support for it in saol IIRC. The original reference implementation was a mess, but I think it's been cleaned up and modernised since then.

- Steve
From: Phil K. <phi...@el...> - 2002-11-06 15:42:48
Thanks Steve (note to self: press reply-to-all, not reply ;)

My ideas were similar but are lower level, more akin to ARP broadcasts. An app broadcasts the DMIDI meta-command for 'who's there' and all devices reply. The session could look something like:

    Enquirer -> ff:ff:00:00
    Device 1 -> ff:ff:00:01 [node id] description
    Device 2 -> ff:ff:00:01 [node id] description

The exact format hasn't been 100% defined yet, but it should be close to the above. I've some very basic code for this, but it hasn't been integrated into anything fully working.

Although SOAP and WSDL are good choices in themselves, they are heavy, and having to include XML parsers does add bloat. I'm not sure if going down the CORBA road would be good, but it does work.

John has started work on a coding guide to MWPP and we should be able to use his examples. Do we have access to any OSC code?

-P

On Wed, 2002-11-06 at 14:58, Steve Harris wrote:
> [putting this back on the list]
>
> On Wed, Nov 06, 2002 at 02:26:40PM +0000, Phil Kerr wrote:
> > The biggest difference between DMIDI and MWPP is its networking scope.
> > MWPP has been optimised for Internet transmission, whereas DMIDI is
> > optimised for LANs and is much closer to raw MIDI. The device
> > identification schema also follows MIDI more closely than MWPP, which
> > uses SIP.
>
> OK, makes sense. I guess that means we should support both, and probably
> OSC, as that seems quite popular and has better resolution than MIDI.
> It is also UDP based IIRC.
>
> All incoming event protocols should be normalised in some way IMHO.
>
> > You mentioned a service discovery mechanism; do you mean some form of
> > application introspection that can be broadcast? This is an interesting
> > area I've been doing some thinking about. Any further thoughts in this
> > area?
>
> Yes, that's it. I've done some work in this area.
>
> There are two forms of service discovery: syntactic (API level) and
> semantic (er, harder to define ;) but we probably don't need it).
>
> There are some existing standards, eg. WSDL (web service description
> language, http://www.w3.org/TR/wsdl), but they tend to be SOAP (XML-RPC)
> biased, and are probably overkill for what we need unless you want to
> describe specific services, eg. an envelope editor.
>
> Basically the deal is you have either a well-known broker or you
> broadcast your requirements ("I'm an Akai GUI, protocol v3.4, I want an
> engine"), and you get back a list of candidates. XML is your friend here.
>
> The scheme I would suggest goes something like:
>
> Broker fires up
> Engines register with Broker (easy if the top level engine is also the Broker)
> UI starts, queries Broker
> Broker queries each engine in turn to find out if it can handle the UI
> Broker returns list to UI
>
> This also scales to multiple Engines on multiple machines; it just means
> that the Engine has to register with an external Broker.
>
> If the Engines are external, but they hold open a socket to the Broker,
> then they can tell if it suddenly goes away (or vice versa), which helps.
>
> - Steve
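Phil's 'who's there' probe maps naturally onto a UDP broadcast datagram. A minimal POSIX sender is sketched below; the ff:ff:00:00 meta-command bytes come from his session example, and the port number is a placeholder, not a settled wire format:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* hypothetical 'who's there' meta-command from the thread */
        const unsigned char probe[4] = { 0xff, 0xff, 0x00, 0x00 };
        const int one = 1;

        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }
        setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &one, sizeof one);

        struct sockaddr_in to;
        memset(&to, 0, sizeof to);
        to.sin_family      = AF_INET;
        to.sin_port        = htons(5004); /* placeholder port */
        to.sin_addr.s_addr = htonl(INADDR_BROADCAST);

        /* devices on the LAN answer with their node id + description */
        if (sendto(fd, probe, sizeof probe, 0,
                   (struct sockaddr *)&to, sizeof to) < 0)
            perror("sendto");
        close(fd);
        return 0;
    }

Each device would reply with a unicast datagram carrying its node id and description, which is where Steve's broker (or the richer XML engine description) could take over.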