From: Steve H. <S.W...@ec...> - 2002-11-11 23:34:50
On Mon, Nov 11, 2002 at 09:52:58 +0100, Matthias Weiss wrote:
> > The difference is that it allows you to (possibly) get a bit lower
> > latency. With out-of-process on my current system I can get down to 128
> > sample blocks; with in-process and changing my filesystem I could get down
> > to 64. However I generally run at 256, as I don't really care about latency
> > that much, and I can get more processing done at 256.
>
> With what applications do you get this result?

All the ones I've tested (ardour, eca*, meterbridge, freqtweak, pd); the 64 figure is hypothetical, I've not run this box at that speed.

> Paul Davis speculated on the lad list that a simple client might just
> have the same latency as an in-process client - as said, I'd like to
> test it out.

Yes, it should. It just varies the worst case: if the worst case isn't bad enough to make you fail to complete in time, it won't make any difference whether it's in-process or out-of-process. The latency thing is basically boolean; if you complete within the ~3 ms you're OK, if you don't you're not. Simple :)

- Steve
From: Matthias W. <mat...@in...> - 2002-11-11 21:23:43
Hi all!

I'd like to know your opinion about some thoughts regarding the GUI. As mentioned by Benno and Steve, when linuxsampler runs as a plugin of jackd, it'll be necessary to use some sort of communication mechanism between the plugin and the GUI. While the stability gain and the code cleanness due to the forced separation of engine code/GUI code are very attractive, there are also some difficulties connected with it.

I think linuxsampler should also have the ability to create a sample instrument/set. Therefore it will be necessary to edit wave files, set loop points etc. Now the first problem that occurs to me: how do I generate the waveview of the sample data when the samples are on one machine and the GUI on the other? Should I pregenerate the sample view data on the plugin machine and send it to the GUI machine? And if I edit the sample, should the edit commands be sent over the net to the plugin machine, where the plugin calculates the result, obtains the new sample view data and sends it back? This seems to get really complicated.

It also makes one of my ideas impossible: embedding an existing wave editor as a component (e.g. as a bonobo component) into linuxsampler. In that case we would not have to reinvent the wheel and write the XXX's wave editor but use an existing one (e.g. snd). But AFAIK the existing ones have no remote-control capabilities.

matthias
From: Matthias W. <mat...@in...> - 2002-11-11 20:56:34
On Sun, Nov 10, 2002 at 10:56:49PM +0000, Steve Harris wrote:
> > My concern is that jackd is a "single point of error", if it crashes all
> > other jack clients won't run.
>
> That's why it's /really/ stable :)

I guess I know what you're saying ;-), though my thought was on inherent stability.

> > I'd not give up the stability advantage of an out-of-process jack client
> > for some 1/10 msec's. On the other hand if the difference in latency is
> > a factor 2 or more I think it's worth the price. A comparison of both variants
> > would be enlightening. I'd like to write some test code but I won't have
> > time for that before December.
>
> The difference is that it allows you to (possibly) get a bit lower
> latency. With out-of-process on my current system I can get down to 128
> sample blocks; with in-process and changing my filesystem I could get down
> to 64. However I generally run at 256, as I don't really care about latency
> that much, and I can get more processing done at 256.

With what applications do you get this result? Paul Davis speculated on the lad list that a simple client might just have the same latency as an in-process client - as said, I'd like to test it out.

matthias
From: Steve H. <S.W...@ec...> - 2002-11-11 08:58:42
On Sun, Nov 10, 2002 at 11:14:53 -0800, Josh Green wrote:
> On Sun, 2002-11-10 at 14:53, Steve Harris wrote:
> > On Sun, Nov 10, 2002 at 11:34:00 -0800, Josh Green wrote:
> > > Jack depends on glib (although I don't think it requires 2.0).
> >
> > Actually jack doesn't depend on glib anymore, it was causing problems with
> > other apps that depended on glib.
>
> What kind of problems? As in 1.2.x/2.0 version conflicts? I'd like to
> make sure I don't run into the same issues with my libraries and the
> like. Cheers.

I think so. It all came out when we tried to make it build against glib 1.2 or 2.0; it turned out to be complicated and we only needed lists anyway. I think that if you link against a library, both you and it have to be using the same major glib version. You should check though, I wasn't following the discussion very closely.

- Steve
From: Josh G. <jg...@us...> - 2002-11-11 07:13:39
On Sun, 2002-11-10 at 14:53, Steve Harris wrote:
> On Sun, Nov 10, 2002 at 11:34:00 -0800, Josh Green wrote:
> > Jack depends on glib (although I don't think it requires 2.0).
>
> Actually jack doesn't depend on glib anymore, it was causing problems with
> other apps that depended on glib.
>
> - Steve

What kind of problems? As in 1.2.x/2.0 version conflicts? I'd like to make sure I don't run into the same issues with my libraries and the like. Cheers.

Josh Green
From: Steve H. <S.W...@ec...> - 2002-11-10 22:56:53
On Sun, Nov 10, 2002 at 09:32:11 +0100, Matthias Weiss wrote:
> > Yes, that's the price we have to pay but crashing is not an option.
>
> My concern is that jackd is a "single point of error", if it crashes all
> other jack clients won't run.

That's why it's /really/ stable :)

> I'd not give up the stability advantage of an out-of-process jack client
> for some 1/10 msec's. On the other hand if the difference in latency is
> a factor 2 or more I think it's worth the price. A comparison of both variants
> would be enlightening. I'd like to write some test code but I won't have
> time for that before December.

The difference is that it allows you to (possibly) get a bit lower latency. With out-of-process on my current system I can get down to 128 sample blocks; with in-process and changing my filesystem I could get down to 64. However I generally run at 256, as I don't really care about latency that much, and I can get more processing done at 256.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-10 22:54:04
On Sun, Nov 10, 2002 at 11:34:00 -0800, Josh Green wrote:
> Jack depends on glib (although I don't think it requires 2.0).

Actually jack doesn't depend on glib anymore; it was causing problems with other apps that depended on glib.

- Steve
From: Matthias W. <mat...@in...> - 2002-11-10 20:35:47
On Fri, Nov 08, 2002 at 11:53:15PM +0100, Benno Senoner wrote:
> On Fri, 2002-11-08 at 22:57, Matthias Weiss wrote:
> > Having jackd loading plugins, that would also mean if
> > a plugin crashes, it takes with it the daemon, right?
>
> Yes, that's the price we have to pay but crashing is not an option.

My concern is that jackd is a "single point of error": if it crashes, all other jack clients won't run.

> Even when only the sampler module goes down it will still break your
> work. I prefer achieving rock solid 2.1 msec latency with the sampler
> than having the soundserver not crashing in the case of a plugin
> failure in exchange of giving up precious msecs of real time response.

I'd not give up the stability advantage of an out-of-process jack client for some 1/10 msec's. On the other hand, if the difference in latency is a factor of 2 or more I think it's worth the price. A comparison of both variants would be enlightening. I'd like to write some test code but I won't have time for that before December.

matthias
From: Benno S. <be...@ga...> - 2002-11-10 20:19:53
On Sun, 2002-11-10 at 20:34, Josh Green wrote:
> > Of course swami needs a powerful playback engine in order to provide
> > a what-you-hear-is-what-you-get feel.
> > This means that ideally the instrument editor and playback engine
> > should go hand in hand so that the editor can match the engine's
> > capabilities and vice versa.
>
> I have been designing Swami with exactly this type of modularity in
> mind. I'm looking forward to working with the LinuxSampler project :)

Nice to have you on board!

Well, basically I think your idea is very good, but since I am not an expert in terms of "live editing of samples" I have no clear idea what the right solution looks like. One of our problems is that some samples reside in RAM and some are streamed from disk with some caching. This means that the editor must be aware of these issues. These are dependent on the sample and engine format, thus it is probably wise to keep that kind of code (without duplicating it at editor level) within the sampler.

As Juan L. suggested, when providing a GUI for the sampler it is probably best to control it through a TCP socket, so that you can separate the engine from the front end (e.g. controlling the sampler remotely from a separate machine, even a windoze box that runs a ported gtk/qt/java GUI :-) ). So if you were asking me, I would apply the same paradigm to the instrument editor too. By serializing the access via a socket you do not risk race conditions, and the thread that handles the parameter/instrument manipulation can be optimized to not interfere with the real time engine. I think remote editing will be advantageous for the future, because the studio professional might have an "audio processing farm" networked with the front end machine, which can control each engine in real time. What do you think?

Josh, can you briefly describe what a general instrument editor must provide (since I am not familiar with what capabilities an editor has to offer)? E.g. manipulating the parameters, looping, assigning samples to keymaps etc. Keep in mind that we are going to work with both in-RAM and on-disk samples, possibly very large libraries (hundreds of MB / several GB).

Plus there is the fact that we want a truly modular sampler where you can wire together basic audio building blocks which represent the final instrument. That means some kind of loader module (possibly a save module too) needs to be written for each engine. This may sound like sci-fi or something, but it isn't, and we plan to start with very basic stuff (like a simple RAM sampler with only basic modulation). I think it will pay off to go that way, because once the concept is implemented very cool things can be done, and I think that stuff like building an efficient AKAI, GIG, SF2 etc. sampler will advance very quickly. Thoughts?

Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux. Please help us designing and developing it.
From: Josh G. <jg...@us...> - 2002-11-10 19:32:47
On Thu, 2002-11-07 at 16:42, Benno Senoner wrote:
> On Fri, 2002-11-08 at 01:14, Josh Green wrote:
>
> Hi Josh,
> I do agree on all your points.
> Indeed Fluid could provide a solid base for SoundFont playback and it
> would be cool if we could collaborate with the Fluid developers to
> produce something that becomes really good and powerful.
>
> Regarding an instrument editor, we strongly need such a beast and you
> are probably the most expert in that field.
> I'm grateful that you collaborate on the project, since without
> a good editor it will be hard to create new patches without resorting
> to windows software.
> I hope that swami becomes really flexible and that in the near future it
> will let you build and edit DLS2, GIG and other instrument types.
>
> Of course swami needs a powerful playback engine in order to provide
> a what-you-hear-is-what-you-get feel.
> This means that ideally the instrument editor and playback engine
> should go hand in hand so that the editor can match the engine's
> capabilities and vice versa.

I have been designing Swami with exactly this type of modularity in mind. I'm looking forward to working with the LinuxSampler project :) If both were based on the same patch manipulation library (such as libInstPatch) this might be easily accomplished. I think the issue of flexible multi-threaded patch objects versus real time access of parameters is going to need some thought. Perhaps a parameter cache could be used that the synth engine can access directly and that gets synchronized to the patch object system. There would most likely be a lot of internal data that the synth itself needs for primitive synth constructs (like envelopes, LFOs, etc.) which are really only needed for active voices and could be allocated from a memory pool (no malloc).

Do you think it would be okay for LinuxSampler to depend on libInstPatch? This would introduce the following dependencies: libInstPatch, GObject 2.0/glib 2.0. Jack depends on glib (although I don't think it requires 2.0). glib/gobject are required by GTK+ and are available on win32 platforms as well as OS X. glib has a lot of other neat features that would probably benefit the development of LinuxSampler (threading; data types like linked lists, trees, hashes; memory pool functions, etc.).

The alternative route is to do something like what FluidSynth does. It is currently loaded into Swami as a plugin, and there is an SFLoader API in Fluid that allows me to load patch information and data on demand as well as update various generator parameters in real time. This would seem to complicate matters though in the case of multiple patch formats, etc. I think using the same patch library would make the most sense.

> cheers,
> Benno

Cheers.
Josh Green

P.S. If anyone is interested in helping out with Swami (http://swami.sourceforge.net) directly, I could really use more developers. The code base is C using GObject and the GUI uses GTK+; the API docs for both those libraries can be found at http://www.gtk.org.
From: Benno S. <be...@ga...> - 2002-11-08 22:42:15
On Fri, 2002-11-08 at 22:57, Matthias Weiss wrote:
> Having jackd loading plugins, that would also mean if
> a plugin crashes, it takes with it the daemon, right?

Yes, that's the price we have to pay, but crashing is not an option. Even when only the sampler module goes down it will still break your work. I'd rather achieve rock solid 2.1 msec latency with the sampler than have the soundserver survive a plugin failure in exchange for giving up precious msecs of real time response.

But given that the programming model for in-process and out-of-process is the same, you can just compile the version you like better. (As said, the fewer processes that run, the bigger the chance that there will be no problems during high load situations.)

Benno

http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux. Please help us designing and developing it.
From: Matthias W. <mat...@in...> - 2002-11-08 22:00:38
On Thu, Nov 07, 2002 at 07:32:16PM +0100, Benno Senoner wrote:
> > > Regarding the JACK issues that Matthias W raised:
> > > I'm no JACK expert but I hope that JACK supports running a JACK client
> > > directly in its own process space as if it were a plugin.
> > > This would save unnecessary context switches since there would be only
> > > one SCHED_FIFO process running.
> > > (Do the JACK ALSA I/O modules work that way?)
> >
> > Yes, but there is currently no mechanism for loading an in-process client
> > once the engine has started; however, that is merely because the function
> > hasn't been written. Both the ALSA and Solaris i/o clients are in process,
> > but they are loaded when the engine starts up.
>
> Ok, at least the engine is designed to work that way (so I guess for
> maximum performance some extensions for JACK will be required but I
> assume that will not be a big problem)

Having jackd load plugins would also mean that if a plugin crashes, it takes the daemon with it, right?

matthias
From: Benno S. <be...@ga...> - 2002-11-08 17:45:25
On Fri, 2002-11-08 at 13:12, Phil Kerr wrote:
> Although this speed is the official spec I've found that ALSA, and a
> number of MIDI cards, do not seem to throttle data back to the spec and
> will pass data through the interface at reception speed. Although this
> is a wider problem for me regarding DMIDI and hardware, it's potentially
> a good thing when we are dealing with software based D/MIDI applications
> (10 Gbit MIDI :).

Yes, of course. That's why I prefer the PC sequencer -> internal MIDI -> PC software synth/sampler solution over using external gear. Timing will be very tight and chords do not suffer from smearing. In fact, in some extreme test cases with my proof-of-concept code, where I flooded it with aligned note-on events, there were CPU overloads due to the voice initialization code eating away a few cycles. Probably we will need to limit the note-ons per fragment in order to level out these CPU load peaks, especially when using very small fragment sizes (32-64 samples); with bigger fragment sizes there is plenty of time for handling the note events.

> So regarding MIDI timing I get the impression that PC host based
> interfaces cannot guarantee the 31.25k rate, but testing is not
> conclusive.

No, that is not true. Some sound cards do not have MIDI RX or TX interrupts and force the driver to go into polling mode, which means either high CPU usage or low throughput (see the SoundBlaster AWE64 MIDI out -> no TX irq -> MIDI sucks :-) ). With other cards like the Hoontech you get note-on midi-out-to-in latency that matches the wire speed. A long time ago I wrote a little tool that tests this: http://www.gardena.net/benno/linux/mididelay-0.1.tgz

Paul Davis and others (using decent MIDI cards) reported 1.1 msec note-on midi-out-to-in latency, which matches the wire speed. On the SB AWE64 I got about 10 msec (due to the lack of a TX irq); plus, when MIDI out traffic is very heavy, the machine slows to a crawl (using OSS/Free). So before buying a MIDI card, check that both RX and TX irqs are present. But my preferred MIDI "cable" between two boxes will be DMIDI over 100Mbit LAN anyway. :-)

PS: does my RAM sample module sound reasonable? (Taking into account the suggestions you guys gave here, like Steve's modulation rate reduction etc.) If there are no objections I'm going to write some code and make the sampling module "audible". :-)

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux. Please help us designing and developing it.
From: Phil K. <phi...@el...> - 2002-11-08 12:20:41
The MIDI spec defines the bit pattern as one start bit, eight data bits and one stop bit, with a transfer rate of 31.25 (+/- 1%) kbaud, hence 10 bits per message byte. Although this speed is the official spec, I've found that ALSA, and a number of MIDI cards, do not seem to throttle data back to the spec and will pass data through the interface at reception speed. Although this is a wider problem for me regarding DMIDI and hardware, it's potentially a good thing when we are dealing with software based D/MIDI applications (10 Gbit MIDI :).

So regarding MIDI timing I get the impression that PC host based interfaces cannot guarantee the 31.25k rate, but testing is not conclusive.

-P

On Fri, 2002-11-08 at 11:29, chr...@ep... wrote:
> > Further to this, serial MIDI only runs at 31.250 kbaud, which means that
> > the maximum resolution of a MIDI note on is 42 samples at 44.1kHz (30 bits
> > at 31250 bits/s).
> >
> > Obviously things like OSC have greater time resolution, but it shows that
> > sample accurate note triggering isn't essential. It may be something worth
> > dropping for efficiency.
>
> I definitely agree with that, as notes will be triggered by pure MIDI
> devices in most cases anyway. And even if not, sample accurate triggering
> is just overkill.
>
> I think doubling MIDI's frequency (in the case of note on that would be
> 2*1067Hz = 2133Hz) would be enough for the internal rate. That way you
> preserve at least MIDI's resolution, no?
>
> (BTW note on has 3 bytes, what are the remaining 6 bits for? CRC?)
From: <chr...@ep...> - 2002-11-08 11:29:56
> Further to this, serial MIDI only runs at 31.250 kbaud, which means that
> the maximum resolution of a MIDI note on is 42 samples at 44.1kHz (30 bits
> at 31250 bits/s).
>
> Obviously things like OSC have greater time resolution, but it shows that
> sample accurate note triggering isn't essential. It may be something worth
> dropping for efficiency.

I definitely agree with that, as notes will be triggered by pure MIDI devices in most cases anyway. And even if not, sample accurate triggering is just overkill.

I think doubling MIDI's frequency (in the case of note on that would be 2*1067Hz = 2133Hz) would be enough for the internal rate. That way you preserve at least MIDI's resolution, no?

(BTW note on has 3 bytes, what are the remaining 6 bits for? CRC?)
From: Benno S. <be...@ga...> - 2002-11-08 00:32:04
On Fri, 2002-11-08 at 01:14, Josh Green wrote:

Hi Josh,

I do agree on all your points. Indeed Fluid could provide a solid base for SoundFont playback, and it would be cool if we could collaborate with the Fluid developers to produce something that becomes really good and powerful.

Regarding an instrument editor, we strongly need such a beast, and you are probably the most expert in that field. I'm grateful that you collaborate on the project, since without a good editor it will be hard to create new patches without resorting to windows software. I hope that swami becomes really flexible and that in the near future it will let you build and edit DLS2, GIG and other instrument types.

Of course swami needs a powerful playback engine in order to provide a what-you-hear-is-what-you-get feel. This means that ideally the instrument editor and playback engine should go hand in hand, so that the editor can match the engine's capabilities and vice versa.

cheers,
Benno

> My main interest, as far as contribution to LinuxSampler, is concerning
> patch manipulation and GUI editor front end. I think the goals of Swami
> fit in with this, and welcome any comments agreeing or disagreeing with
> that. I'm not a direct developer of FluidSynth (beyond tracking down
> bugs and suggesting things), so I can't really answer specifics about
> it.
>
> I was not necessarily suggesting that LinuxSampler should be based off
> of FluidSynth, only that there is probably a bit of information that
> could be gained from existing projects such as that one.
>
> As to performance, I'm not really familiar with Timidity's SoundFont
> capabilities. It seems a bit crude to me to just compare them side by
> side without knowing what features are enabled. For instance Timidity
> might not have Reverb/Chorus enabled by default. There have also been
> some recent gains in performance in CVS. That being said, I really
> should check out Timidity's capabilities sometime (I have never actually
> had it working, although admittedly I have not tried for a while).
>
> While performance with FluidSynth leaves a lot to be desired, it does
> have a fairly complete implementation of the SoundFont specification
> (still missing some things though).
>
> Cheers.
> Josh Green

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux. Please help us designing and developing it.
From: Josh G. <jg...@us...> - 2002-11-08 00:13:00
On Thu, 2002-11-07 at 08:14, Benno Senoner wrote:
> Steve, Josh:
> Regarding the IIWU synth, I tried it today in conjunction with the
> large fluid sound font:
> http://inanna.ecs.soton.ac.uk/~swh/fluid-unpacked/
>
> IIWU/Fluid sounds quite nice but it seems to be quite CPU heavy.
> I took a look at the voice generation routines and it is all pretty much
> "hard coded", plus it uses integer math (integer, fractional parts etc.)
> which is not as efficient as you might think.
> I tested it on my dual Celeron 366 and when playing MIDI files it often
> cannot keep up because the CPU load goes to 100%.
> The same MIDI file played in Timidity on the same box works ok without
> drop outs.
> I do not want to criticize IIWU here, I think the authors have done
> quite nice work, but I don't see it as suitable as a base for our "sampler
> construction kit", or like Steve H. said, "maybe the question should be
> whether it's easier to add disk streaming to FluidSynth".
> I know some of you want quick results or say "if we set the target too
> high we will not reach it and developers might lose interest etc.",
> but I think the open source world still lacks a very well thought out,
> flexible and efficient sampling engine, and this takes some time.
> Sure, we can learn a lot from Fluid, perhaps turning it into a SoundFont
> playback module for LinuxSampler, but I do not envision LinuxSampler as
> an extension of Fluid.

My main interest, as far as contribution to LinuxSampler, is patch manipulation and a GUI editor front end. I think the goals of Swami fit in with this, and I welcome any comments agreeing or disagreeing with that. I'm not a direct developer of FluidSynth (beyond tracking down bugs and suggesting things), so I can't really answer specifics about it.

I was not necessarily suggesting that LinuxSampler should be based off of FluidSynth, only that there is probably a bit of information that could be gained from existing projects such as that one.

As to performance, I'm not really familiar with Timidity's SoundFont capabilities. It seems a bit crude to me to just compare them side by side without knowing what features are enabled. For instance, Timidity might not have Reverb/Chorus enabled by default. There have also been some recent gains in performance in CVS. That being said, I really should check out Timidity's capabilities sometime (I have never actually had it working, although admittedly I have not tried for a while).

While performance with FluidSynth leaves a lot to be desired, it does have a fairly complete implementation of the SoundFont specification (still missing some things though).

Cheers.
Josh Green
From: Benno S. <be...@ga...> - 2002-11-07 23:52:35
Hi Frank,

thanks for the AKAI FTP link. The other day I was discussing with Steve H. how to dump AKAI CDs to disk. One way could be a raw dump of the CD image, the other dumping the samples and the program information. Perhaps we should support both in order to make everyone happy (quite easy to implement).

Regarding the large AKAI setup memory requirements: as said, the opinions on this list are mixed. Some say keep it all in RAM, others say stream from disk in order to allow working with a larger number of samples. Since the RAM sample module will differ only very little from the disk based sample module, we could for example provide both, where the disk based version provides fewer features (e.g. no loop point modulation, reverse loops etc.).

Benno

On Thu, 2002-11-07 at 01:03, Frank Neumann wrote:
> First of all, hi :-). I joined the list last week but only lurked so far (and I
> am not sure if I'll be able to contribute a lot to this project, but.. oh well).

http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux. Please help us designing and developing it.
From: Steve H. <S.W...@ec...> - 2002-11-07 23:09:51
|
On Thu, Nov 07, 2002 at 07:32:16 +0100, Benno Senoner wrote: > The jitter correction just ensures that the delay will be constant, > exactly one fragment. OK, I'm not sure that is always what you want, but it is a minor issue, and easy to experiment with. > Plus when driven from a sequencer, the sampler can provide sample > accurate audio rendering. (but in that case a time stamped protocol is > probably needed, time stamped MIDI anyone ? ) Yes, I agree here, but time stamped MIDI sounds nasty :) > > Yes, though generally the CV signals run at a lower rate than the audio > > signals (eg. 1/4tr or 1/16th). Like Krate signals in Music N. Providing > > pitch data at 1/4 audio rate is more than enough and will save cycles. > > Yes this is a good idea ... perhaps allowing variable CV granularity > or better to run at fixed 1/4 audio rate ? Better to have it variable, or at least in a macro I think. In csound it is selectable per orc file IIRC. > > As long as the compiler issues the branch prediction instruction correctly > > (to hint that the condition will be false), it will be fine. You can check > > this by looking at the .s output. > > How do you check this ? In PIII+ there is an instruction that gets issued, I think. Its one of the things they improved in gcc3.2. The compilers defualt should go the right way anyway. > > If you are refering to phase pointers, then its not an efficientcy issue, > > if you use floating point numbers then the sample will play out of key, > > only slightly, but enough that you can tell. > > Out of key because using 32bit floats does provide only a 24bit mantissa > ? > In my proof of concept code I use 64bit float for the playback pointers > and it works flawlessly even with extreme pitches. OK, doubles should be OK, but it seems a bit wasteful to use doubles for this. Maybe not. 
[in process]

> Ok, at least the engine is designed to work that way (so I guess for
> maximum performance some extensions for JACK will be required but I
> assume that will not be a big problem)

No, it shouldn't be too bad.

> > Obviously things like OSC have greater time resolution, but it shows
> > that sample accurate note triggering isn't essential. It may be something
> > worth dropping for efficiency.
>
> Steve, using your own words, for the efficiency "nazis" we could always
> tell the signal recompiler to #undef the event handling code and compile an
> event-less version. ;)

Yes, absolutely, this is what I meant. The system should know whether it's expecting timestamps or not.

- Steve |
From: Benno S. <be...@ga...> - 2002-11-07 18:21:29
|
On Thu, 2002-11-07 at 17:43, Steve Harris wrote:
> > For example without timestamped events, when using not so small buffer
> > sizes, note starts/stops would get quantized to buffer boundaries
> > introducing unacceptable timing artifacts.
>
> That is true in principle, but bear in mind that in many cases the note on
> events will be coming in over real time MIDI, and therefore will have to
> be processed as soon as possible, i.e. at the start of the next process()
> block.

Do not underestimate these artifacts; perhaps they go unnoticed with 1 msec audio fragments, but when you drive up the buffer size things begin to sound weird. And think about the fact that the event can come just a few samples before the current fragment gets to the audio output. This means that the almost one-full-fragment-long delay will occur anyway. The small delay will go unnoticed. The jitter correction just ensures that the delay will be constant, exactly one fragment.

You can calculate the needed delay by looking at either the soundcard's frame pointer, or alternatively using gettimeofday() / RDTSC and scaling it to the sample rate (44100 nominal, or for even more precision calibrate it to the real sample rate).

Read this paper in order to convince yourself that the jitter correction is needed in order to provide excellent timing:
http://www.rme-audio.de/english/techinfo/lola_latec.htm

The price to pay is very low since it involves just a few calculations outside the innermost loop. Plus when driven from a sequencer, the sampler can provide sample accurate audio rendering. (but in that case a time stamped protocol is probably needed, time stamped MIDI anyone ?)

> Yes, though generally the CV signals run at a lower rate than the audio
> signals (e.g. 1/4th or 1/16th). Like Krate signals in Music N. Providing
> pitch data at 1/4 audio rate is more than enough and will save cycles.

Yes this is a good idea ... perhaps allowing variable CV granularity, or better to run at fixed 1/4 audio rate ? 
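[Editor's note: the delay calculation described above fits in a few lines. This is an illustration, not project code; it assumes event times and fragment start times are read from the same clock (e.g. gettimeofday() scaled to seconds) and that every event is delayed by exactly one fragment:

```c
/* Sketch of the jitter correction described above: each incoming event
 * is delayed by exactly one fragment, and its arrival time is converted
 * into a sample offset inside the fragment in which it will be rendered.
 * The constant one-fragment delay absorbs the arrival jitter. */

#define SAMPLE_RATE   44100.0
#define FRAGMENT_SIZE 64        /* samples per fragment (illustrative) */

/* event_time and fragment_start are seconds on the same clock
 * (e.g. gettimeofday() readings, or RDTSC scaled to seconds). */
static int event_sample_offset(double event_time, double fragment_start)
{
    double offset = (event_time - fragment_start) * SAMPLE_RATE;
    if (offset < 0.0)            offset = 0.0;
    if (offset >= FRAGMENT_SIZE) offset = FRAGMENT_SIZE - 1;
    return (int)offset;         /* render the event at this sample */
}
```

An event arriving 0.5 ms after a fragment starts would then be rendered at sample 22 of the following fragment instead of being quantized to the fragment boundary.]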
> > > Or perhaps both models can be used ? (a time stamped event that supplies
>
> I think a mixture of the two is necessary.

I believe that too, but I'd like to hear more opinions.

> > That way we save CPU time and can avoid the statement
> > if(sample_ptr >= loop_end_pos) reset_sample_ptr();
> > within the audio rendering loop.
>
> As long as the compiler issues the branch prediction instruction correctly
> (to hint that the condition will be false), it will be fine. You can check
> this by looking at the .s output.

How do you check this ? I mean probably there will be a cmp (compare statement) followed by a conditional jump (jge); how can the compiler issue branch prediction instructions on x86 ? I thought it was the task of the CPU to figure it out ?

> > IIWU/Fluid sounds quite nice but it seems to be quite CPU heavy.
> > I took a look at the voice generation routines and it is all pretty much
> > "hard coded" plus it uses integer math (integer, fractional parts etc)
> > which is not as efficient as you might think.
>
> If you are referring to phase pointers, then it's not an efficiency issue;
> if you use floating point numbers then the sample will play out of key,
> only slightly, but enough that you can tell.

Out of key because using 32bit floats provides only a 24bit mantissa ? In my proof of concept code I use 64bit float for the playback pointers and it works flawlessly even with extreme pitches. (I think that with a 48bit mantissa it would take quite a lot of errors adding up in order to notice that the tune changes). Was this the issue or am I missing something ?

> > Regarding the JACK issues that Matthias W raised:
> > I'm no JACK expert but I hope that JACK supports running a JACK client
> > directly in its own process space as if it were a plugin.
> > This would save unnecessary context switches since there would be only
> > one SCHED_FIFO process running.
> > (Do the JACK ALSA I/O modules work that way ? 
)

> Yes, but there is currently no mechanism for loading an in process client
> once the engine has started, however that is merely because the function
> hasn't been written. Both the ALSA and Solaris i/o clients are in process,
> but they are loaded when the engine starts up.

Ok, at least the engine is designed to work that way (so I guess for maximum performance some extensions for JACK will be required, but I assume that will not be a big problem).

> Further to this, serial MIDI only runs at 31.250 kbaud, which means
> that the maximum resolution of a MIDI note on is 42 samples at 44.1kHz
> (30 bits at 31250 bits/s).

Yes, with 32-sample fragments (my beloved 2.1 msec latency case) you can match MIDI resolution with at-block-boundary rendering. I think people will want to use bigger buffer sizes too, perhaps to increase performance or because the particular hardware can't cope with such small buffers. Plus there is offline audio rendering, or rendering driven from a sequencer, where sample accurate rendering can sometimes help avoid flanging effects due to small delays in triggering similar or equal waveforms.

The price to pay for handling the sample accurate events is quite low, because when there are no events pending no CPU is wasted within the innermost loop. The usual way to handle time stamped events within an audio block:

while(num_samples_before_event = event_pending()) {
    process(num_samples_before_event);
    handle_event();
}
process(num_samples_after_event);

Of course event_pending() and process() will probably be inlined macros in order to provide maximum performance, but as you can see, the only overhead over an event-less system is the checking of event_pending() at the beginning of the block and after an event has occurred, because more than one event per fragment can occur. 
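[Editor's note: for concreteness, here is a runnable version of the event-splitting loop described above. This is an illustration with invented names; the real engine would pull events from a lock-free FIFO and inline everything:

```c
/* Render one fragment in sub-blocks, cutting at each event's timestamp
 * so the event takes effect sample-accurately. `pitch` stands in for
 * whatever voice state an event would change in a real engine. */

#define FRAGMENT 64

typedef struct {
    int   frame;      /* sample offset within the fragment   */
    float new_pitch;  /* state change carried by the event   */
} Event;

static float pitch = 1.0f;

/* Stand-in for the real rendering code: writes current state to out. */
static void process(float *out, int from, int n)
{
    for (int i = 0; i < n; i++)
        out[from + i] = pitch;
}

/* ev[] must be sorted by frame, with frames inside the fragment. */
static void render_fragment(float *out, const Event *ev, int nev)
{
    int pos = 0;
    for (int e = 0; e < nev; e++) {           /* event_pending() analogue */
        process(out, pos, ev[e].frame - pos); /* samples before the event */
        pitch = ev[e].new_pitch;              /* handle_event()           */
        pos = ev[e].frame;
    }
    process(out, pos, FRAGMENT - pos);        /* rest of the fragment     */
}
```

With events at frames 10 and 40, the fragment is rendered as three sub-blocks, and samples 0-9, 10-39 and 40-63 each see the state that was current at their position.]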
When using lock-free FIFOs or linked lists (probably the lock-free FIFO is more efficient since it allows asynchronous insertion by other modules), you simply check the presence of an element within the structure, which usually involves checking a pointer or doing a subtraction (lock-free FIFO) ... not a big deal, especially since it lies outside the innermost loop.

> Obviously things like OSC have greater time resolution, but it shows
> that sample accurate note triggering isn't essential. It may be something
> worth dropping for efficiency.

Steve, using your own words, for the efficiency "nazis" we could always tell the signal recompiler to #undef the event handling code and compile an event-less version. :-)

I think the recompiler has many advantages, like easily being able to provide "simplified" instruments where you do not include LP filters, envelopes etc. E.g. in cases where you need only sample playback without any post processing, leave out all the DSP stuff and you get an instrument that is faster than the "standard" ones while it does exactly what you want.

Benno

--
http://linuxsampler.sourceforge.net Building a professional grade software sampler for Linux. Please help us designing and developing it. |
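[Editor's note: the "checking a pointer or doing a subtraction" point is worth making concrete. Below is a minimal single-writer/single-reader ring of the kind meant above. It is a sketch under stated assumptions: exactly one producer thread and one consumer thread, and it omits the memory barriers a real SMP implementation would need:

```c
/* Minimal single-producer/single-consumer FIFO: the emptiness check in
 * the audio loop is just a subtraction of two counters, so it costs
 * almost nothing when no events are pending. Unsigned wrap-around
 * keeps the index arithmetic correct. */

#define QSIZE 256                     /* capacity, power of two */

typedef struct {
    int buf[QSIZE];
    volatile unsigned write_idx;      /* advanced only by the producer */
    volatile unsigned read_idx;       /* advanced only by the consumer */
} Fifo;

static int fifo_pending(const Fifo *f)    /* elements waiting */
{
    return (int)(f->write_idx - f->read_idx);
}

static int fifo_push(Fifo *f, int v)      /* producer side */
{
    if (fifo_pending(f) == QSIZE)
        return 0;                         /* full, event dropped */
    f->buf[f->write_idx % QSIZE] = v;
    f->write_idx++;                       /* publish after the write */
    return 1;
}

static int fifo_pop(Fifo *f)              /* consumer (audio) side */
{
    int v = f->buf[f->read_idx % QSIZE];
    f->read_idx++;
    return v;
}
```

The audio thread calls fifo_pending() once per block (and after each handled event); insertion from the MIDI or GUI side never blocks the render path.]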
From: Steve H. <S.W...@ec...> - 2002-11-07 17:02:09
|
> Regarding the note On/Off triggers I would propose to use timestamped
> events which allow sample accurate triggering.
> For example without timestamped events, when using not so small buffer
> sizes, note starts/stops would get quantized to buffer boundaries
> introducing unacceptable timing artifacts.

Further to this, serial MIDI only runs at 31.250 kbaud, which means that the maximum resolution of a MIDI note on is 42 samples at 44.1kHz (30 bits at 31250 bits/s).

Obviously things like OSC have greater time resolution, but it shows that sample accurate note triggering isn't essential. It may be something worth dropping for efficiency.

- Steve |
From: Steve H. <S.W...@ec...> - 2002-11-07 16:43:22
|
On Thu, Nov 07, 2002 at 05:14:46 +0100, Benno Senoner wrote:
> I'd like you to comment about the design of a simple RAM sample playback ...
> At a later stage we can introduce the disk based sample playback module
> which will act in a similar way as the RAM version but with some
> limitations (eg loop point modulation will not be possible etc)

Good :)

> Regarding the note On/Off triggers I would propose to use timestamped
> events which allow sample accurate triggering.
> For example without timestamped events, when using not so small buffer
> sizes, note starts/stops would get quantized to buffer boundaries
> introducing unacceptable timing artifacts.

That is true in principle, but bear in mind that in many cases the note on events will be coming in over real time MIDI, and therefore will have to be processed as soon as possible, i.e. at the start of the next process() block. It could be that there will also be non realtime triggering, but if we know there isn't, only handling note on at the start of process() is significantly more efficient.

If you think about it, handling timestamped events is a pathological case of the branch you describe later. It's worse because the branch predictor can't know which way to go.

> In the latter case it is perhaps more efficient to provide a continuous
> stream of control values (one for each sample) (Steve, do CV models
> work like that ?)

Yes, though generally the CV signals run at a lower rate than the audio signals (e.g. 1/4th or 1/16th). Like Krate signals in Music N. Providing pitch data at 1/4 audio rate is more than enough and will save cycles.

> Or perhaps both models can be used ? (a time stamped event that supplies

I think a mixture of the two is necessary.

> That way we save CPU time and can avoid the statement
> if(sample_ptr >= loop_end_pos) reset_sample_ptr();
> within the audio rendering loop. 
As long as the compiler issues the branch prediction instruction correctly (to hint that the condition will be false), it will be fine. You can check this by looking at the .s output.

> IIWU/Fluid sounds quite nice but it seems to be quite CPU heavy.
> I took a look at the voice generation routines and it is all pretty much
> "hard coded" plus it uses integer math (integer, fractional parts etc)
> which is not as efficient as you might think.

If you are referring to phase pointers, then it's not an efficiency issue; if you use floating point numbers then the sample will play out of key, only slightly, but enough that you can tell.

> I know some of you want quick results or say "if we set the target too
> high we will not reach it and developers might lose interest etc",
> but I think the open source world still lacks a very well thought out,
> flexible and efficient sampling engine and this takes some time.

I think the starting point of a simple RAM based playback module is good. It allays my fears about this.

> Regarding the JACK issues that Matthias W raised:
> I'm no JACK expert but I hope that JACK supports running a JACK client
> directly in its own process space as if it were a plugin.
> This would save unnecessary context switches since there would be only
> one SCHED_FIFO process running.
> (Do the JACK ALSA I/O modules work that way ? )

Yes, but there is currently no mechanism for loading an in process client once the engine has started, however that is merely because the function hasn't been written. Both the ALSA and Solaris i/o clients are in process, but they are loaded when the engine starts up.

- Steve |
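[Editor's note: on the "how do you check this" question from the thread — with GCC you can inspect the generated code (`gcc -S`, then read the `.s` output) and also hint the branch yourself via `__builtin_expect`, which existed in the gcc 3.x series under discussion. A sketch with illustrative names:

```c
/* Hint the loop-wrap test as unlikely so the compiler lays out the
 * fall-through path as the hot one (and, on CPUs that honour static
 * prediction hints, emits the hint). The wrap itself happens at most
 * once per pass through the loop region. */

#define unlikely(x) __builtin_expect(!!(x), 0)

static double advance(double pos, double pitch,
                      double loop_start, double loop_end)
{
    pos += pitch;
    if (unlikely(pos >= loop_end))            /* rare: wrap to loop start */
        pos = loop_start + (pos - loop_end);
    return pos;
}
```

Compiling with and without the hint and diffing the `-S` output shows how the cmp/jge pair mentioned above gets arranged.]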
From: Benno S. <be...@ga...> - 2002-11-07 16:04:01
|
Hi, (please read carefully, long mail :-) )

I'd like you to comment about the design of a simple RAM sample playback module that can later be incorporated into the signal path compiler. (Of course many other modules like FXes, modulators etc will be required for a real sample playback engine, but the sampler module is fundamental here and it is very important that it is well designed, efficient and provides good audio quality.) At a later stage we can introduce the disk based sample playback module, which will act in a similar way as the RAM version but with some limitations (e.g. loop point modulation will not be possible etc).

Now to my RAM sample module proposal; first see this simple diagram:
http://linuxsampler.sourceforge.net/images/ramsamplemodule1.png

Basically it contains no MIDI or keymapping capabilities, because this will be delegated to other modules. The module allows note start/stop triggering, and processing of looping information via lists of looping segments so that you can loop the sample in very flexible ways. Being RAM based, one could even modulate the loop points without any problem.

Regarding the note On/Off triggers I would propose to use timestamped events which allow sample accurate triggering. For example without timestamped events, when using not so small buffer sizes, note starts/stops would get quantized to buffer boundaries introducing unacceptable timing artifacts.

Where I am not so sure about using timestamped events or not is when modulating the pitch. The problem is we need to handle both real time pitch messages like those generated by the pitch-bend MIDI controller, and at the same time allow the pitch of the sampler module to be modulated by elements like LFOs, envelopes etc. In the latter case it is perhaps more efficient to provide a continuous stream of control values (one for each sample) (Steve, do CV models work like that ?) Or perhaps both models can be used ? 
(a time stamped event that supplies an array of control values, e.g. bufsize=64 samples and we want to modulate the pitch in a continuous way, so we simply generate an event that supplies 64 control values (floats) at the beginning of the processing of each fragment). This is a very important issue and I'd like the experts out here to give us the right advice.

Another important issue is how to process looping information efficiently:

--------|-------------------------|----
        |            |            |
      start    playback ptr      end

Basically we need to figure out when the playback ptr goes past the loop end point and reset it to the loop start position. Assume that the playback ptr gets increased by the pitch value during each iteration of the audio rendering loop (pitch=1.0: sample plays at nominal speed; pitch=2.0: sample plays one octave higher; etc.).

If pitch remains constant for the entire audio fragment, we can figure out at which iteration the playback ptr needs to be reset to the loop start position. That way we save CPU time and can avoid the statement

if(sample_ptr >= loop_end_pos) reset_sample_ptr();

within the audio rendering loop.

When assuming that the pitch only gets changed by MIDI pitch bend events, the above event based model works well since the pitch remains constant between two events. The problems arise when we let the pitch be modulated with single sample resolution by other modules like LFOs and envelopes. Generating an event for each sample is too heavy in terms of CPU cycles, and since external modules can modulate the pitch in an almost arbitrary way, it becomes hard to estimate when the sample playback ptr needs to be reset to the loop start position.

I see 3 solutions for this problem (I hope that you guys can come up with something more efficient if it exists):

a) preserve the if(sample_ptr >= loop_end_pos) ... 
statement within the audio rendering loop, waste a bit of CPU, but allow arbitrary pitch modulation, regardless of whether it is event based or driven by continuous values.

b) limit the upward pitch modulation to, let's say, +5 octaves from the root note (max pitch=32). That way you can estimate when you will need to start to check if the loop end point was reached, assuming

cycles_to_go = (loop_end_pos - playback_pos) / pitch

With pitch=32 you waste CPU time in the sense that you need to perform the if() .. check for up to 32*samples_per_fragment iterations. This is not that much, since when running real time samples_per_fragment can be as low as 32 or 64, thus 64*32 = 2048. You perform the if() check 2048 out of possibly hundreds of thousands of times (assuming each sample is around 100k). This means the waste of CPU is only a few % while still allowing arbitrary modulation with some upward limits (the limitation of pitch-up modulation to +5 octaves).

c) allow only linear ramping between two pitch events, e.g. at each iteration you do:

playback_ptr += pitch;
pitch += delta_pitch;

Complex pitch modulation would be emulated through many linear ramps, sending pitch events. The linear behaviour of the pitch lets you easily calculate the position where you need to reset the playback pointer to the loop_start position.

So what do you think about a), b) and c) ? Personally I prefer a) or b); if a) does not waste that much CPU I'd like to use this method since it allows flexible pitch modulation. But probably the impact will not be negligible. Does a d) solution that is more efficient than the above ones exist ? Your thoughts and comments please.

Below I'm responding to other issues raised in the last messages in order to avoid spamming the list too much:

Steve, Josh: Regarding the IIWU synth, I tried it today in conjunction with the large fluid sound font:
http://inanna.ecs.soton.ac.uk/~swh/fluid-unpacked/

IIWU/Fluid sounds quite nice but it seems to be quite CPU heavy. 
I took a look at the voice generation routines and it is all pretty much "hard coded", plus it uses integer math (integer, fractional parts etc) which is not as efficient as you might think. I tested it on my dual Celeron 366 and when playing MIDI files it often cannot keep up because the CPU load goes to 100%. The same MIDI file played in Timidity on the same box works ok without drop outs.

I do not want to criticize IIWU here; I think the authors have done quite nice work, but I don't see it as suitable as a base for our "sampler construction kit", or, like Steve H. said, "maybe the question should be whether it's easier to add disk streaming to FluidSynth". I know some of you want quick results or say "if we set the target too high we will not reach it and developers might lose interest etc", but I think the open source world still lacks a very well thought out, flexible and efficient sampling engine, and this takes some time. Sure, we can learn a lot from Fluid, perhaps turning it into a SoundFont playback module for LinuxSampler, but I do not envision LinuxSampler as an extension of Fluid.

Phil K.: Regarding the GUI, socket and DMIDI issues: as some of you said, it is better to separate GUI and (MIDI) real time control sockets. The GUI can easily use the real time socket to issue MIDI commands etc. I think we should provide an intermediate layer for handling these real time messages so that one can easily support multiple backends like DMIDI, alsa-seq, raw-midi, etc.

Alex Klein: Hi, welcome on board. If you have good experience with Windows audio software samplers, in particular GigaStudio, this is ideal since you can do side-by-side comparisons, suggest improvements, check performance, etc. 
(Especially since you said you have lots of spare time in the next months :-) )

Regarding VST support in linuxsampler, according to this:
http://eca.cx/lad/2002/11/0109.html
VST for Linux would need some modifications in the headers and asking Steinberg for permission to redistribute the result. That given, you could quite easily port to Linux the DSP processing part of VST plugins where the source is available. The GUI is another issue and would probably require a complete rewrite, unless the author has used some cross-platform toolkit like Qt etc. But as you know, the native plugin API for Linux is LADSPA and we will of course support it in LinuxSampler (mainly for FX processing).

Regarding the JACK issues that Matthias W raised: I'm no JACK expert but I hope that JACK supports running a JACK client directly in its own process space as if it were a plugin. This would save unnecessary context switches since there would be only one SCHED_FIFO process running. (Do the JACK ALSA I/O modules work that way ? )

cheers
Benno

--
http://linuxsampler.sourceforge.net Building a professional grade software sampler for Linux. Please help us designing and developing it. |
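[Editor's note: solution b) from Benno's earlier mail can be sketched as follows: precompute how many iterations are guaranteed not to cross the loop end at the current pitch, run a check-free inner loop for that many samples, and fall back to a per-iteration wrap test only near the boundary. This is an illustration with invented names, not project code, and it truncates the playback position instead of interpolating:

```c
/* Render n output samples from a looped sample without testing
 * `pos >= loop_end` on every iteration: the inner loop runs for a
 * precomputed safe count, so the wrap test executes once per outer
 * iteration instead of once per sample. Assumes pitch > 0. */
static void render_looped(float *out, int n, double *pos, double pitch,
                          double loop_start, double loop_end,
                          const float *sample)
{
    int i = 0;
    while (i < n) {
        if (*pos >= loop_end)                       /* rare: wrap */
            *pos = loop_start + (*pos - loop_end);

        /* iterations guaranteed to stay below loop_end at this pitch */
        long safe = (long)((loop_end - *pos) / pitch);
        if (safe < 1)     safe = 1;                 /* always progress */
        if (safe > n - i) safe = n - i;

        for (long k = 0; k < safe; k++) {           /* check-free hot loop */
            out[i++] = sample[(long)*pos];          /* nearest-sample read */
            *pos += pitch;
        }
    }
}
```

With a pitch ceiling as in solution b), `safe` is large for almost the whole sample, so the boundary handling costs a handful of iterations per loop pass rather than one branch per sample.]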
From: Phil K. <phi...@el...> - 2002-11-07 10:17:09
|
Nope, being able to do this gives us more flexibility.

-P

On Thu, 2002-11-07 at 09:18, Steve Harris wrote:
> On Thu, Nov 07, 2002 at 12:11:51 +0000, Phil Kerr wrote:
> > > > I think most of the other controls of the engine will be MIDI based
> > > > data: notes, pitchbend, cc data.
> > >
> > > Right, but that is more of a realtime control thing, than a UI thing. The
> > > UI could use those (eg. for testing), but doesn't have to. MIDI can be
> > > received either by alsa midi, or dmidi or whatever.
> >
> > It could be a mixture of both. I can alter, for example, filter settings from
> > either the front panel or via sysex.
>
> Right, but nothing stops the UI from opening a DMIDI connection to the
> engine as well as the UI socket.
>
> - Steve
>
> _______________________________________________
> Linuxsampler-devel mailing list
> Lin...@li...
> https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel |
From: Steve H. <S.W...@ec...> - 2002-11-07 09:18:23
|
On Thu, Nov 07, 2002 at 12:11:51 +0000, Phil Kerr wrote:
> > > I think most of the other controls of the engine will be MIDI based
> > > data: notes, pitchbend, cc data.
> >
> > Right, but that is more of a realtime control thing, than a UI thing. The
> > UI could use those (eg. for testing), but doesn't have to. MIDI can be
> > received either by alsa midi, or dmidi or whatever.
>
> It could be a mixture of both. I can alter, for example, filter settings from
> either the front panel or via sysex.

Right, but nothing stops the UI from opening a DMIDI connection to the engine as well as the UI socket.

- Steve |