From: Josh G. <jg...@us...> - 2002-11-07 05:59:46

On Wed, 2002-11-06 at 09:44, Steve Harris wrote:
> On Wed, Nov 06, 2002 at 09:35:33AM -0800, Josh Green wrote:
> > peer jamming on LAN/internet). Of course much of this may fit well with
> > the goals of LinuxSampler. iiwusynth (now called FluidSynth) may also
> > have much to offer, although I'm not sure if the authors know about
> > this project yet. I'll shoot an email over to the iiwusynth devel list.
> > It would be a shame for all of us to re-invent the wheel and then find
> > that the wheels don't even work together :) Cheers.
>
> Interesting, maybe the question should be whether it's easier to add disk
> streaming to FluidSynth. I played with iiwusynth a few months back and it
> seemed very impressive.
>
> - Steve

FluidSynth has been a little quiet for a while (as well as my own project) but things seem to be picking up again. One of the developers, Markus Nentwig, is doing lots of optimization work. We were just recently making a list of future plans, one of them being sample streaming support (including disk streaming). I really think that Swami/FluidSynth are quite nice and that a lot of people are missing out by not trying them :) Cheers.

Josh Green
From: Frank N. <bea...@we...> - 2002-11-07 00:03:05

Hi,

Benno wrote:
[...]
> Steve, as suspected there are people that agree with me that when loading
> AKAI samples in RAM you can easily end up burning 256MB of RAM, which is
> a lot for not high-end PCs. Let's see how the discussion evolves ...
> AKAI experts, what do you say?

First of all, hi :-). I joined the list last week but have only lurked so far (and I am not sure if I'll be able to contribute a lot to this project, but... oh well). Second, I am no AKAI expert, but at least I have an S2000 with 32 MB RAM at home (so I should be able to give some information about how "the real thing" is done), and I also have a couple of sampling CDs for it. My main interest is to be able to use that beast in my MIDI setup at home, and that's what a small hobby project I've been pursuing for quite a while now is focused on (no comments here; I need to be able to release something first :).

What I can add to this discussion right now is that of the sampling CDs I have here, most instrument sets are rather small; the largest I have don't even fill the 32 MB RAM of the sampler, though there are certainly much larger sampling collections out there. When I recently had the opportunity to play a little with a Windows-based music system (using a Creamware Pulsar Scope board and Cubase) and checked out the software sampler modules that come with the Scope, I found that it typically expected sample sets no larger than 100 MB (I believe that was the fixed upper bound). All sampling CDs I could use there were also much smaller (per "instrument") than 32 MB. However, by creating several layers you could easily multiply the memory requirements by 2, 3 or 4.

Frank

PS: I recently found a nice sample resource at AKAI: ftp://ftp.akaipro.co.jp/pub/downloads — they have a couple of soundsets for the MPC2000XL and S5000/S6000. The few sets I managed to download so far sounded quite good (most even in stereo). There is one especially interesting piece, a 190 MB zip archive of a stereo piano... sounds like a good test candidate :-). I was happy to see that Paul Kellett's AKAI file format information page should fully suffice to be able to parse the program file (*.AKP).
From: Phil K. <phi...@el...> - 2002-11-06 22:54:34

Quoting Steve Harris <S.W...@ec...>:
> On Wed, Nov 06, 2002 at 09:49:40 +0000, Phil Kerr wrote:
> > Aaaah, I see :)
> >
> > So we have n number of engines and a GUI broadcasts a message to see
> > what's there. Each engine responds with config data.
>
> I was thinking there would be a broker, and that could do the selection.
> Broadcast is a bit problematic.
>
> Probably the only info needed would be what engine the UI is for, and
> what version of the UI protocol it speaks (internal to the UI<->engine
> connection), eg:
>
> <engine>
>   <type>linux-akai</type>
>   <protocol major="0" minor="1" micro="1" />
> </engine>
>
> Other stuff can be done between the UI and engine privately; other things
> don't need to be able to understand it.

Yes, this looks good.

> > I think most of the other controls of the engine will be MIDI based
> > data: notes, pitchbend, cc data.
>
> Right, but that is more of a realtime control thing than a UI thing. The
> UI could use those (eg. for testing), but doesn't have to. MIDI can be
> received either by alsa midi, or dmidi or whatever.

It could be a mixture of both. I can alter, for example, filter settings from either the front panel or via sysex.

-P

> - Steve
>
> -------------------------------------------------------
> This sf.net email is sponsored by: See the NEW Palm
> Tungsten T handheld. Power & Color in a compact size!
> http://ads.sourceforge.net/cgi-bin/redirect.pl?palm0001en
> _______________________________________________
> Linuxsampler-devel mailing list
> Lin...@li...
> https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel
From: Phil K. <phi...@el...> - 2002-11-06 22:33:38

Yes. Local MIDI control using ALSA shouldn't be a problem, and it may work out best to have two, or more, ports for remote control. I'm not 100% sure if there are any issues with context switching in the TCP stack, but I don't see that there would be too many situations where you would be hammering the GUI and control ports at the same time.

-P

Quoting "Richard A. Smith" <rs...@bi...>:
> On Wed, 6 Nov 2002 17:46:34 +0000, Steve Harris wrote:
> > > For real-time control DMIDI would be good as it allows hardware control
> > > and the syntax checking is minimal. This is where using XML instead of
> > > CC or SYSEX messages would be too heavy.
> >
> > Argh, yes! I think we were talking at cross purposes. I was just thinking
> > of the service discovery phase; obviously the control protocol wants to
> > be binary and lightweight.
> >
> > > I think it would be good if we could find a split point between what can
> > > be controlled by a GUI and what can be controlled by MIDI hardware, even
> > > though there's overlap to an extent.
>
> What kind of real-time GUI control are we talking about?
>
> Perhaps I'm showing my sampler ignorance here, but it seems to me that
> control via MIDI and control via socket are 2 separate entities in 2
> separate worlds with very different manners of operation. So why try to
> handle them with the same system?
>
> If you wanted to send the engine say something like DMIDI data, then
> shouldn't that be on a separate socket? I'm not sure that the
> one-socket-do-it-all approach makes any sense. Most of the GUI stuff is
> all patch uploading, layering control, assignment of channels, loop
> points, graph setup, etc, all non-real-time stuff.
>
> I suppose if you are controlling it via a software sequencer then there
> would be a good bit of real-time type data, but I would think that's much
> better handled via the midi system rather than a general purpose GUI
> control port.
>
> --
> Richard A. Smith          Bitworks, Inc.
> rs...@bi...              479.846.5777 x104
> Sr. Design Engineer       http://www.bitworks.com
From: Richard A. S. <rs...@bi...> - 2002-11-06 21:01:25

On Wed, 6 Nov 2002 17:46:34 +0000, Steve Harris wrote:
> > For real-time control DMIDI would be good as it allows hardware control
> > and the syntax checking is minimal. This is where using XML instead of
> > CC or SYSEX messages would be too heavy.
>
> Argh, yes! I think we were talking at cross purposes. I was just thinking
> of the service discovery phase; obviously the control protocol wants to
> be binary and lightweight.
>
> > I think it would be good if we could find a split point between what can
> > be controlled by a GUI and what can be controlled by MIDI hardware, even
> > though there's overlap to an extent.

What kind of real-time GUI control are we talking about?

Perhaps I'm showing my sampler ignorance here, but it seems to me that control via MIDI and control via socket are 2 separate entities in 2 separate worlds with very different manners of operation. So why try to handle them with the same system?

If you wanted to send the engine say something like DMIDI data, then shouldn't that be on a separate socket? I'm not sure that the one-socket-do-it-all approach makes any sense. Most of the GUI stuff is all patch uploading, layering control, assignment of channels, loop points, graph setup, etc, all non-real-time stuff.

I suppose if you are controlling it via a software sequencer then there would be a good bit of real-time type data, but I would think that's much better handled via the midi system rather than a general purpose GUI control port.

--
Richard A. Smith          Bitworks, Inc.
rs...@bi...              479.846.5777 x104
Sr. Design Engineer       http://www.bitworks.com
From: Steve H. <S.W...@ec...> - 2002-11-06 20:52:51

On Wed, Nov 06, 2002 at 09:49:40 +0000, Phil Kerr wrote:
> Aaaah, I see :)
>
> So we have n number of engines and a GUI broadcasts a message to see
> what's there. Each engine responds with config data.

I was thinking there would be a broker, and that could do the selection. Broadcast is a bit problematic.

Probably the only info needed would be what engine the UI is for, and what version of the UI protocol it speaks (internal to the UI<->engine connection), eg:

<engine>
  <type>linux-akai</type>
  <protocol major="0" minor="1" micro="1" />
</engine>

Other stuff can be done between the UI and engine privately; other things don't need to be able to understand it.

> I think most of the other controls of the engine will be MIDI based data:
> notes, pitchbend, cc data.

Right, but that is more of a realtime control thing than a UI thing. The UI could use those (eg. for testing), but doesn't have to. MIDI can be received either by alsa midi, or dmidi or whatever.

- Steve
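The engine descriptor proposed in this mail is plain XML, so a UI (or broker) could extract the engine type and UI-protocol version with a stock parser. A minimal sketch in Python's stdlib ElementTree — the element and attribute names simply follow the example above, not any actual LinuxSampler protocol:

```python
import xml.etree.ElementTree as ET

# Descriptor copied from the mail above.
DESCRIPTOR = """
<engine>
  <type>linux-akai</type>
  <protocol major="0" minor="1" micro="1" />
</engine>
"""

def parse_engine_descriptor(xml_text):
    """Return (engine_type, (major, minor, micro)) from a descriptor."""
    root = ET.fromstring(xml_text)
    proto = root.find("protocol")
    version = tuple(int(proto.get(k)) for k in ("major", "minor", "micro"))
    return root.findtext("type"), version

engine_type, version = parse_engine_descriptor(DESCRIPTOR)
# engine_type == "linux-akai", version == (0, 1, 1)
```

A broker could compare the parsed version tuple against what a connecting UI speaks and reject mismatches before any engine-specific traffic flows.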
From: Phil K. <phi...@el...> - 2002-11-06 20:32:27

Aaaah, I see :)

So we have n number of engines and a GUI broadcasts a message to see what's there. Each engine responds with config data.

So, what data does the GUI need?

An instance identifier.
What MIDI chan. it is on.
Patch details (sample info, adsr, filter, loop points).
Effect details.
JACK connection info.
..... Others

The GUI would be greatly enhanced if it can display waveform data, so there should be a mechanism for passing this from the engine. It doesn't need to be a dump of the sample data, just enough info to do editing.

I think most of the other controls of the engine will be MIDI based data: notes, pitchbend, cc data.

What about uploading samples from the GUI to the engine remotely?

Thoughts?

-P

Quoting Steve Harris <S.W...@ec...>:
> On Wed, Nov 06, 2002 at 05:34:21PM +0000, Phil Kerr wrote:
> > For config data using XML has clear cross-platform advantages and
> > XML-RPC is a lot lighter than SOAP and could serve our needs well.
>
> I was thinking of raw XML, not XML-RPC.
>
> > For real-time control DMIDI would be good as it allows hardware control
> > and the syntax checking is minimal. This is where using XML instead of
> > CC or SYSEX messages would be too heavy.
>
> Argh, yes! I think we were talking at cross purposes. I was just thinking
> of the service discovery phase; obviously the control protocol wants to
> be binary and lightweight.
>
> > I think it would be good if we could find a split point between what can
> > be controlled by a GUI and what can be controlled by MIDI hardware, even
> > though there's overlap to an extent.
>
> Yes, probably.
>
> - Steve
From: Steve H. <S.W...@ec...> - 2002-11-06 17:46:45

On Wed, Nov 06, 2002 at 05:34:21PM +0000, Phil Kerr wrote:
> For config data using XML has clear cross-platform advantages and
> XML-RPC is a lot lighter than SOAP and could serve our needs well.

I was thinking of raw XML, not XML-RPC.

> For real-time control DMIDI would be good as it allows hardware control
> and the syntax checking is minimal. This is where using XML instead of
> CC or SYSEX messages would be too heavy.

Argh, yes! I think we were talking at cross purposes. I was just thinking of the service discovery phase; obviously the control protocol wants to be binary and lightweight.

> I think it would be good if we could find a split point between what can
> be controlled by a GUI and what can be controlled by MIDI hardware, even
> though there's overlap to an extent.

Yes, probably.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-06 17:44:18

On Wed, Nov 06, 2002 at 09:35:33AM -0800, Josh Green wrote:
> peer jamming on LAN/internet). Of course much of this may fit well with
> the goals of LinuxSampler. iiwusynth (now called FluidSynth) may also
> have much to offer, although I'm not sure if the authors know about
> this project yet. I'll shoot an email over to the iiwusynth devel list.
> It would be a shame for all of us to re-invent the wheel and then find
> that the wheels don't even work together :) Cheers.

Interesting, maybe the question should be whether it's easier to add disk streaming to FluidSynth. I played with iiwusynth a few months back and it seemed very impressive.

- Steve
From: Phil K. <phi...@el...> - 2002-11-06 17:42:10

Both approaches have advantages. The high-level approach, using XML, gives us a far richer dataset to work with. The low-level approach allows for better MIDI hardware support. There's probably a cut-off point where one method is better than the other.

For config data, using XML has clear cross-platform advantages, and XML-RPC is a lot lighter than SOAP and could serve our needs well.

For real-time control DMIDI would be good as it allows hardware control and the syntax checking is minimal. This is where using XML instead of CC or SYSEX messages would be too heavy.

I think it would be good if we could find a split point between what can be controlled by a GUI and what can be controlled by MIDI hardware, even though there's overlap to an extent.

Cheers

-P

On Wed, 2002-11-06 at 15:58, Steve Harris wrote:
> On Wed, Nov 06, 2002 at 03:34:58PM +0000, Phil Kerr wrote:
> > My ideas were similar but are lower level, more akin to ARP broadcasts.
>
> I think the higher level has advantages.
>
> > Although SOAP and WSDL are good choices in themselves they are heavy and
> > having to include XML parsers does add bloat. I'm not sure if going
> > down the CORBA road would be good but it does work.
>
> Yes, I wasn't suggesting SOAP or WSDL for connection level service
> discovery; they are much too heavy, though that has nothing to do with
> XML. I don't think you can describe the overhead of an XML parser
> (especially libxml) on a sampler engine as bloat. CORBA would be, however.
>
> It would be nice if engines or guis could be written on any platform that
> can support TCP/IP and XML, without requiring the use of low level DMIDI
> stuff. XML also removes the need for syntax descriptions and so on; you
> just publish a DTD or schema.
>
> > John has started work on a coding guide to MWPP and we should be able to
> > use his examples. Do we have access to any OSC code?
>
> Sure, the reference implementation is available, and John L. has
> implemented support for it in saol IIRC. The original reference
> implementation was a mess, but I think it's been cleaned up and modernised
> since then.
>
> - Steve
From: Josh G. <jg...@us...> - 2002-11-06 17:33:23

On Wed, 2002-11-06 at 06:18, Steve Harris wrote:
> On Tue, Nov 05, 2002 at 11:35:24PM -0800, Josh Green wrote:
> > - Using GObject, a C based object system (GStreamer uses this). This
> > gives us an object system with C the lowest common denominator
> > programming language. GObject also has an easy python binding code
> > generator that is part of the pygtk package.
>
> Cool. Does GObject have c++ bindings for the c++ nazis here ;)

GObject is part of glib 2.0 and is used by GTK+ 2.0. Therefore gtkmm (the C++ bindings for GTK) contains GObject bindings as well. I don't know the details of this, though, or whether the GObject stuff can be used by itself without the GTK dependency.

> > - How well would the object system libInstPatch uses fit with a real
> > time synthesizer? What kind of system could be used to associate
> > synthesizer time critical data with patch objects?
>
> It would probably have to be abstracted in some way. The deal with RT-ish
> audio software is you get handed a buffer big enough for 64 samples (for
> example), and you have <<1 ms to fill it then return. This means no
> malloc, no file i/o, no serious data traversal.

I am also pretty familiar with the requirements of real time synthesis; that is why I was posing that question. Since patch parameters and objects are multi-thread locked for various operations, they might not be very friendly to real time situations. That is why I was trying to think of some sort of parameter caching system, where each active preset has its own parameter space that the synth has direct access to and can define its own values in (it would need this for its internal variables anyway). This could then be synchronized with the object system. I perceive it as a trade-off between patch parameter flexibility and speed. Creating 2 systems that synchronize with each other could give us the best of both worlds.

> I would imagine that most engines will preprepare large linear sample
> buffers of the audio that they're most likely to be required to play, then
> when their RT callback gets called they can just copy from the prepared
> buffers (maybe apply post processing effects if they have RT controls)
> and then return.

Streaming from disk is, I think, the answer. Of course small samples could be cached in RAM (Linux does this for us to some extent anyway). What I'm concerned with currently is the patch information and parameters and the real time response in querying them (what the synth engine would be doing).

> If the post processing only contains effects that are controlled by
> time (static envelopes, retriggered LFOs etc.), not by user interaction
> (MIDI, sliders, dynamic envelopes, whatever) then they could be applied
> ahead of time. But maybe that is too special a case.

Well, I suppose there might be some optimization that could occur when there aren't any real time controls connected to a synthesis parameter. SoundFont uses modulators to connect parameters to MIDI controls.

> - Steve

BTW, if anyone has yet to try my program Swami (http://swami.sourceforge.net) you may want to do so :) It currently uses iiwusynth as its software synthesizer, which has a lot of the features we are talking about but uses SoundFont files as its basis synthesis format. Modulator support is one of the things already working in iiwusynth (even loop points can now be modulated, but only in CVS of Swami and iiwusynth). The underlying architecture of Swami is all GObject and in my opinion very flexible and powerful (plugins, wavetable objects, etc). I have many plans to create instrument patch oriented applications with it (Python binding, online web patch database, multi peer jamming on LAN/internet). Of course much of this may fit well with the goals of LinuxSampler. iiwusynth (now called FluidSynth) may also have much to offer, although I'm not sure if the authors know about this project yet. I'll shoot an email over to the iiwusynth devel list. It would be a shame for all of us to re-invent the wheel and then find that the wheels don't even work together :) Cheers.

Josh Green
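One way to read the "two synchronized systems" idea in this mail: the mutex-locked patch object stays authoritative for editing, while the synth keeps a per-voice snapshot that is swapped in with a single atomic reference assignment outside the audio callback. A toy sketch of that split — class and method names here are invented for illustration, not from libInstPatch:

```python
import copy
import threading

class PatchObject:
    """Editor-side patch parameters, guarded by a lock (as the mail
    describes for libInstPatch-style objects)."""
    def __init__(self, **params):
        self._lock = threading.Lock()
        self._params = dict(params)

    def set(self, name, value):
        with self._lock:
            self._params[name] = value

    def snapshot(self):
        with self._lock:
            return copy.deepcopy(self._params)

class VoiceCache:
    """Synth-side view: the audio thread reads `params` without locking,
    because sync() replaces the whole dict in one reference assignment."""
    def __init__(self, patch):
        self.params = patch.snapshot()

    def sync(self, patch):
        # Called from a non-RT thread; the audio thread never sees a
        # half-updated parameter set.
        self.params = patch.snapshot()

patch = PatchObject(cutoff=0.5, attack=0.01)
voice = VoiceCache(patch)
patch.set("cutoff", 0.8)   # editor thread mutates the locked object
voice.sync(patch)          # synchronization point, outside the RT path
# voice.params["cutoff"] == 0.8
```

This trades a little staleness (the voice sees edits only at sync points) for lock-free reads in the time-critical path, which is the flexibility-versus-speed trade-off the mail describes.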
From: Steve H. <S.W...@ec...> - 2002-11-06 15:58:09

On Wed, Nov 06, 2002 at 03:34:58PM +0000, Phil Kerr wrote:
> My ideas were similar but are lower level, more akin to ARP broadcasts.

I think the higher level has advantages.

> Although SOAP and WSDL are good choices in themselves they are heavy and
> having to include XML parsers does add bloat. I'm not sure if going
> down the CORBA road would be good but it does work.

Yes, I wasn't suggesting SOAP or WSDL for connection level service discovery; they are much too heavy, though that has nothing to do with XML. I don't think you can describe the overhead of an XML parser (especially libxml) on a sampler engine as bloat. CORBA would be, however.

It would be nice if engines or guis could be written on any platform that can support TCP/IP and XML, without requiring the use of low level DMIDI stuff. XML also removes the need for syntax descriptions and so on; you just publish a DTD or schema.

> John has started work on a coding guide to MWPP and we should be able to
> use his examples. Do we have access to any OSC code?

Sure, the reference implementation is available, and John L. has implemented support for it in saol IIRC. The original reference implementation was a mess, but I think it's been cleaned up and modernised since then.

- Steve
From: Phil K. <phi...@el...> - 2002-11-06 15:42:48

Thanks Steve, (note to self: press reply-to-all, not reply ;)

My ideas were similar but are lower level, more akin to ARP broadcasts. An app broadcasts the DMIDI meta-command for 'who's there' and all devices reply. The session could look something like:

Enquirer -> ff:ff:00:00
Device 1 -> ff:ff:00:01 [node id] description
Device 2 -> ff:ff:00:01 [node id] description

The exact format hasn't been 100% defined yet, but it should be close to the above. I've some very basic code for this but it hasn't been integrated into anything fully working.

Although SOAP and WSDL are good choices in themselves, they are heavy, and having to include XML parsers does add bloat. I'm not sure if going down the CORBA road would be good, but it does work.

John has started work on a coding guide to MWPP and we should be able to use his examples. Do we have access to any OSC code?

-P

On Wed, 2002-11-06 at 14:58, Steve Harris wrote:
> [putting this back on the list]
>
> On Wed, Nov 06, 2002 at 02:26:40PM +0000, Phil Kerr wrote:
> > The biggest difference between DMIDI and MWPP is its networking scope.
> > MWPP has been optimised for Internet transmission whereas DMIDI is
> > optimised for LANs and is much closer to raw MIDI. The device
> > identification schema also follows MIDI closer than MWPP, which uses
> > SIP.
>
> OK, makes sense. I guess that means we should support both, and probably
> OSC, as that seems quite popular and has better resolution than MIDI.
> It is also UDP based IIRC.
>
> All incoming event protocols should be normalised in some way IMHO.
>
> > You mentioned a service discovery mechanism, do you mean some form of
> > application introspection that can be broadcast? This is an interesting
> > area I've been doing some thinking about. Any further thoughts in this
> > area?
>
> Yes, that's it. I've done some work in this area.
>
> There are two forms of service discovery, syntactic (API level) and
> semantic (er, harder to define ;) but we probably don't need it).
>
> There are some existing standards, eg. WSDL (web service description
> language, http://www.w3.org/TR/wsdl), but they tend to be SOAP (XML-RPC)
> biased, and are probably overkill for what we need unless you want to
> describe specific services, eg. an envelope editor.
>
> Basically the deal is you have either a well-known broker or broadcast your
> requirements ("I'm an Akai GUI, protocol v3.4, I want an engine"), and you
> get back a list of candidates. XML is your friend here.
>
> The scheme I would suggest goes something like:
>
> Broker fires up
> Engines register with Broker (easy if the top level engine is also the Broker)
> UI starts, queries Broker
> Broker queries each engine in turn to find out if it can handle the UI
> Broker returns list to UI
>
> This also scales to multiple Engines on multiple machines; it just means
> that the Engine has to register with an external Broker.
>
> If the Engines are external, but they hold open a socket to the Broker,
> then they can tell if it suddenly goes away (or vice versa), which helps.
>
> - Steve
From: Steve H. <S.W...@ec...> - 2002-11-06 14:59:00

[putting this back on the list]

On Wed, Nov 06, 2002 at 02:26:40PM +0000, Phil Kerr wrote:
> The biggest difference between DMIDI and MWPP is its networking scope.
> MWPP has been optimised for Internet transmission whereas DMIDI is
> optimised for LANs and is much closer to raw MIDI. The device
> identification schema also follows MIDI closer than MWPP, which uses
> SIP.

OK, makes sense. I guess that means we should support both, and probably OSC, as that seems quite popular and has better resolution than MIDI. It is also UDP based IIRC.

All incoming event protocols should be normalised in some way IMHO.

> You mentioned a service discovery mechanism, do you mean some form of
> application introspection that can be broadcast? This is an interesting
> area I've been doing some thinking about. Any further thoughts in this
> area?

Yes, that's it. I've done some work in this area.

There are two forms of service discovery, syntactic (API level) and semantic (er, harder to define ;) but we probably don't need it).

There are some existing standards, eg. WSDL (web service description language, http://www.w3.org/TR/wsdl), but they tend to be SOAP (XML-RPC) biased, and are probably overkill for what we need unless you want to describe specific services, eg. an envelope editor.

Basically the deal is you have either a well-known broker or broadcast your requirements ("I'm an Akai GUI, protocol v3.4, I want an engine"), and you get back a list of candidates. XML is your friend here.

The scheme I would suggest goes something like:

Broker fires up
Engines register with Broker (easy if the top level engine is also the Broker)
UI starts, queries Broker
Broker queries each engine in turn to find out if it can handle the UI
Broker returns list to UI

This also scales to multiple Engines on multiple machines; it just means that the Engine has to register with an external Broker.

If the Engines are external, but they hold open a socket to the Broker, then they can tell if it suddenly goes away (or vice versa), which helps.

- Steve
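The register/query scheme in this mail can be sketched as a tiny in-process broker. Everything here is illustrative — the engine descriptors, the hypothetical "linux-gig" engine name, and the matching rule (same type, same major.minor protocol) are assumptions, since the mail deliberately leaves the matching criteria to each engine:

```python
class Broker:
    """Toy broker: engines register, a UI asks for compatible candidates."""
    def __init__(self):
        self.engines = []

    def register(self, engine):
        # "Engines register with Broker"
        self.engines.append(engine)

    def candidates(self, ui_type, ui_protocol):
        # "Broker queries each engine in turn to find out if it can
        # handle the UI" -- here reduced to a type + version check.
        return [e for e in self.engines
                if e["type"] == ui_type
                and e["protocol"][:2] == ui_protocol[:2]]

broker = Broker()
broker.register({"type": "linux-akai", "protocol": (0, 1, 1)})
broker.register({"type": "linux-gig", "protocol": (0, 2, 0)})  # hypothetical

# A UI speaking akai protocol 0.1 asks the broker for engines:
matches = broker.candidates("linux-akai", (0, 1))
# matches == [{"type": "linux-akai", "protocol": (0, 1, 1)}]
```

In a real deployment each `register` call would arrive over a TCP socket held open, so the broker notices when an engine disappears, as the mail notes.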
From: Steve H. <S.W...@ec...> - 2002-11-06 14:18:58

On Tue, Nov 05, 2002 at 11:35:24PM -0800, Josh Green wrote:
> - Using GObject, a C based object system (GStreamer uses this). This
> gives us an object system with C the lowest common denominator
> programming language. GObject also has an easy python binding code
> generator that is part of the pygtk package.

Cool. Does GObject have c++ bindings for the c++ nazis here ;)

> - How well would the object system libInstPatch uses fit with a real
> time synthesizer? What kind of system could be used to associate
> synthesizer time critical data with patch objects?

It would probably have to be abstracted in some way. The deal with RT-ish audio software is you get handed a buffer big enough for 64 samples (for example), and you have <<1 ms to fill it then return. This means no malloc, no file i/o, no serious data traversal.

I would imagine that most engines will preprepare large linear sample buffers of the audio that they're most likely to be required to play; then when their RT callback gets called they can just copy from the prepared buffers (maybe apply post processing effects if they have RT controls) and then return.

If the post processing only contains effects that are controlled by time (static envelopes, retriggered LFOs etc.), not by user interaction (MIDI, sliders, dynamic envelopes, whatever), then they could be applied ahead of time. But maybe that is too special a case.

- Steve
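The callback discipline described here — pre-render into linear buffers off the RT thread, then only copy inside the callback — can be illustrated structurally. This is a sketch only: Python allocates under the hood, so it shows the shape of the design, not actual RT-safe code; names and the tiny frame count are invented for illustration:

```python
FRAMES = 4  # tiny for illustration; the mail's example uses 64 samples

class Voice:
    """Copies pre-rendered audio into the callback's output buffer."""
    def __init__(self, prepared):
        self.prepared = prepared  # filled ahead of time by a non-RT thread
        self.pos = 0

    def rt_callback(self, out):
        # The only work done "inside" the callback: copy FRAMES samples
        # from the prepared buffer and advance. No disk I/O, no rendering.
        chunk = self.prepared[self.pos:self.pos + FRAMES]
        out[:len(chunk)] = chunk
        out[len(chunk):] = [0.0] * (FRAMES - len(chunk))  # pad with silence
        self.pos += FRAMES

voice = Voice(prepared=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
buf = [0.0] * FRAMES
voice.rt_callback(buf)  # buf -> [0.1, 0.2, 0.3, 0.4]
voice.rt_callback(buf)  # buf -> [0.5, 0.6, 0.0, 0.0]
```

In C, `prepared` would be a preallocated linear sample buffer refilled by a disk-streaming thread, and the callback body would reduce to a `memcpy` plus any RT-controlled post-processing.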
From: Steve H. <S.W...@ec...> - 2002-11-06 14:08:01
|
On Wed, Nov 06, 2002 at 02:08:07PM +0100, Benno Senoner wrote:
> Regarding the sampler->GUI protocol I think it is wise to use a
> TCP socket for this since it saves you from the hassles of handling
> lost packets, and since on a LAN TCP transmission errors show up very
> rarely, I think the speed that can be achieved (for GUI purposes) is
> more than adequate and most of the time the round-trip latency will be
> below 1ms.

I agree. I also don't think it is necessary or desirable to define the on-the-wire protocol down to the action level. Some sampler engines will have things we haven't thought of; some will implement things in incompatible ways. I think it's enough to say it will be TCP (or UDP) and maybe define a service discovery mechanism.

Re. the IP MIDI thing, what's wrong with the RTP MIDI standard that's currently in the pre-RFC stage? RTP seems more sensible than multitransmit UDP to me.

> Regarding using CV I am not 100% sure about that; sure, it simplifies
> things a bit, but I think that in some cases we lose efficiency and
> flexibility.

How so? We would gain flexibility, and the efficiency issue is not that simple. Using event based systems it's very hard to express arbitrary modulations. NB I'm only advocating CV for the processing part of the signal path; the sample based signal generation obviously needs to be event based. I seriously doubt there are any sample engines out there that are event based internally (for the post-processing part); it would make your life very hard. In any case, if we use sub-engines for each sample format, it's up to each sub-engine how it's implemented internally.

- Steve |
From: Steve H. <S.W...@ec...> - 2002-11-06 13:55:32
|
On Wed, Nov 06, 2002 at 11:47:12AM +0000, Phil Kerr wrote: > Hi Benno, > > 100mb ethernet..... Oh that's so old fashioned, WiFi is better :) Actually, wifi has a pretty high latency, try running X over it sometime, there is a detectable delay, even though the throughput is high enough. - Steve |
From: Phil K. <phi...@el...> - 2002-11-06 11:55:00
|
Hi Benno,

100mb ethernet..... Oh that's so old fashioned, WiFi is better :)

That's the goal of DMIDI, it allows any platform to communicate with anything. This really would allow LinuxSampler to gain a good foothold in non-Linux studios if it can be controlled from existing applications or hardware.

There's always a balance between using TCP and UDP, but there's a great deal of overlap, especially when we are talking about LAN environments. I've been testing networks for a while and even on our POS 10mb LAN at work I've not dropped a packet, but TCP does give you an extra level of protection. RTTs within a LAN would be good enough to use TCP, but IIRC it does add a small delay (0.5 ms).

DMIDI uses raw UDP, as RTP would only tell you how late a packet is. UDP is also supported by a wider range of languages and hardware (important for me). The only time you need protection is when you try to stream over the Internet, and this is where John Lazzaro has done good work.

When describing ideal DMIDI networking conditions I use the analogy of audio cabling. Our target audience of non-technical musicians understand this concept well. Double-sending UDP packets works quite well and doesn't seem to add much network overhead. MIDI doesn't have any protection mechanism and it works well.

It may be good if people check out SpiralSynth and DMIDI to get a taste of how it works: www.dmidi.org. Note: I get playback delays with SpiralSynth itself of 200-400ms quite often; test this by using a qwerty keyboard. The plan is to add more remote control to SSM and then update SpiralSynth later.

Cheers

Phil

On Wed, 2002-11-06 at 13:08, Benno Senoner wrote:
> (discussion about DMIDI and GUI/Sampler communication)
>
> Hi Phil,
> I took a brief look at DMIDI and it looks interesting.
> Of course it would be very handy to add DMIDI support to LinuxSampler.
> Imagine several rackmount PCs running the sampler connected to a 100Mbit LAN,
> which allows you to drive the sampler even via a Windows/Mac client (Cubase, Logic)
> with a DMIDI output driver.
>
> Regarding the sampler->GUI protocol I think it is wise to use a TCP socket for this
> since it saves you from the hassles of handling lost packets, and since on a LAN
> TCP transmission errors show up very rarely, I think the speed that can be achieved
> (for GUI purposes) is more than adequate and most of the time the round-trip latency
> will be below 1ms.
>
> For real time messages like MIDI, UDP is of course much better but I'd like it to be
> somewhat "safe" (e.g. it is not nice to hear a group of hanging notes due to a lost packet).
>
> Can DMIDI currently deal with errors? What protocol do you use? RTP or raw UDP?
> Not sure if the paradigm of "better late than never" fits into MIDI, but as said,
> missing notes, controller msgs etc. can do much more damage than late events.
> So I think we need a bit of redundancy when sending MIDI events.
> In the case you are using raw UDP, how about simply sending two identical packets
> for each group of events and having the host detect and discard the duplicates?
> As said before, a lost packet on a non-broken LAN happens very seldom (except in
> extremely congested networks, but that is not the typical case of a LAN used for
> MIDI transmission).
> Thus common sense tells me that if we send two packets instead of one, the
> probability that both don't come through is very, very low.
> (With a uniform error distribution, P_both_packets_lost = P_packet1_lost * P_packet2_lost.)
>
> I think this is a very interesting topic and I'd like you to share your thoughts with us.
>
> Regarding using CV I am not 100% sure about that; sure, it simplifies things a bit,
> but I think that in some cases we lose efficiency and flexibility.
>
> I'm currently designing an efficient RAM sampler object that can fit within the
> modular and recompilable sampler concept (using time stamps for maximum precision
> and efficiency).
> I will post a diagram and a description in 1-2 days so that the list can comment on
> and correct any design mistakes.
>
> cheers,
> Benno |
From: Benno S. <be...@ga...> - 2002-11-06 11:05:10
|
(discussion about DMIDI and GUI/Sampler communication)

Hi Phil,
I took a brief look at DMIDI and it looks interesting. Of course it would be very handy to add DMIDI support to LinuxSampler. Imagine several rackmount PCs running the sampler connected to a 100Mbit LAN, which allows you to drive the sampler even via a Windows/Mac client (Cubase, Logic) with a DMIDI output driver.

Regarding the sampler->GUI protocol I think it is wise to use a TCP socket for this since it saves you from the hassles of handling lost packets, and since on a LAN TCP transmission errors show up very rarely, I think the speed that can be achieved (for GUI purposes) is more than adequate and most of the time the round-trip latency will be below 1ms.

For real time messages like MIDI, UDP is of course much better but I'd like it to be somewhat "safe" (e.g. it is not nice to hear a group of hanging notes due to a lost packet).

Can DMIDI currently deal with errors? What protocol do you use? RTP or raw UDP? Not sure if the paradigm of "better late than never" fits into MIDI, but as said, missing notes, controller msgs etc. can do much more damage than late events. So I think we need a bit of redundancy when sending MIDI events. In the case you are using raw UDP, how about simply sending two identical packets for each group of events and having the host detect and discard the duplicates? As said before, a lost packet on a non-broken LAN happens very seldom (except in extremely congested networks, but that is not the typical case of a LAN used for MIDI transmission). Thus common sense tells me that if we send two packets instead of one, the probability that both don't come through is very, very low. (With a uniform error distribution, P_both_packets_lost = P_packet1_lost * P_packet2_lost.)

I think this is a very interesting topic and I'd like you to share your thoughts with us.

Regarding using CV I am not 100% sure about that; sure, it simplifies things a bit, but I think that in some cases we lose efficiency and flexibility.

I'm currently designing an efficient RAM sampler object that can fit within the modular and recompilable sampler concept (using time stamps for maximum precision and efficiency). I will post a diagram and a description in 1-2 days so that the list can comment on and correct any design mistakes.

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us design and develop it. |
From: <chr...@ep...> - 2002-11-06 10:36:21
|
On Mon, Nov 04, 2002 at 10:10:21PM +0000, Steve Harris wrote:
> In both cases, if linuxsampler is in-process of a host app (let's say a
> sequencer) or if it's running in a separate process there would be the
> same context switch to another client. So the only benefit would be that
> there is no process context switch between the host application and the
> sampler, but still a thread context switch. Steve, does the cache
> clearing also apply to thread switches?

Depends on the kind of thread you're using. User level threads don't flush the cache on a context switch. About kernel level threads I'm not really sure, because there is also a mode switch (user mode <-> kernel mode) whenever a context switch happens with this kind of thread. But I wouldn't put the sampler in another 'host' application's process anyway. |
From: Phil K. <phi...@el...> - 2002-11-06 08:05:27
|
Hi,

Although it's probably slightly too early to discuss this in detail, it may be useful to map out some basic functionality between the GUI and engine. Some ideas (please feel free to add):

Sample select
Loop start/end points
ADSR
Main volume
Tuning
Filter selection/cutoff/resonance
Effect selection/level
ALSA/JACK config
.....

What protocol do we use to communicate between the two? UDP is a better choice than TCP due to its speed. I presume that it will be used on a LAN, so typical delays should be under 2ms. Using DMIDI would allow MIDI software/hardware to interact with the engine, and it's well on its way to becoming an IEEE standard. Basic functionality can be obtained by mapping control change messages to functions. I added ALSA 0.9 and DMIDI code to SpiralSynth and it was really easy to do this, and it should fit into the CV model that Steve Harris suggested. More complex data can be transmitted using a SysEx-like format. The downside to this is MIDI's low bit resolution; the upside is that almost anything can control it.

Thoughts?

Phil |
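[editorial sketch] "Mapping control change messages to functions" can be as simple as a dispatch table from CC number to a setter. The CC assignments below follow MIDI convention (CC 7 = channel volume, CC 74 = brightness/cutoff), but the handler names, the cutoff curve, and the `state` dict are made-up illustrations, not LinuxSampler or DMIDI API:

```python
# Sketch of dispatching MIDI control change messages to engine
# parameters, as suggested above. All names here are hypothetical.

state = {}

def set_main_volume(value):
    state["volume"] = value / 127.0                       # 7-bit -> 0.0..1.0

def set_filter_cutoff(value):
    # Map 0..127 exponentially onto roughly 20 Hz .. 20 kHz.
    state["cutoff_hz"] = 20.0 * (1000.0 ** (value / 127.0))

CC_MAP = {
    7: set_main_volume,      # CC 7: channel volume (MIDI convention)
    74: set_filter_cutoff,   # CC 74: brightness/cutoff (MIDI convention)
}

def handle_control_change(cc, value):
    handler = CC_MAP.get(cc)
    if handler:
        handler(value)       # unmapped CCs are simply ignored

handle_control_change(7, 127)
handle_control_change(74, 0)
```

The low 7-bit resolution Phil mentions is visible here: volume only has 128 steps, which is why the thread suggests a SysEx-like format for anything needing finer control.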
From: Josh G. <jg...@us...> - 2002-11-06 07:32:58
|
Went on a trip for a few days and came back to find my email box full of discussions about a patch loading library. What follows are details concerning my libInstPatch library and its design goals. If you haven't had a look at my API for libInstPatch (http://swami.sourceforge.net), here are my current plans for it. I am very curious how well this project could fit in with the LinuxSampler project (actually I'm also wondering if Swami could become a GUI for it, since I'm planning on fully objectifying the GUI as well; it is pretty much already, just not the interface).

- Using GObject, a C based object system (GStreamer uses this). This gives us an object system with C as the lowest common denominator programming language. GObject also has an easy Python binding code generator that is part of the pygtk package.
- Not attempting to make a unified patch format (sounds like others agree).
- Take advantage of GObject's typed property system for setting parameters on patches. This provides for a bit of a unified API between patch formats; example of setting parameters on a SoundFont preset in C:

g_object_set (preset,
              "name", "Preset Foo",
              "bank", 0,
              "psetnum", 4,
              NULL);

- Patch formats are broken out into individual GObjects (IPatchSFont, IPatchSFPreset, IPatchSFInst, IPatchSFZone and IPatchSFSample for SoundFont banks, for example) and organized in a parent/child tree structure where appropriate.
- Multi-threaded patch objects.
- A sample store object that provides for pluggable methods of storing sample data (RAM, swap file, etc.).

I believe this design will provide for a rather flexible patch file library, and other formats can easily be added to it.

Some things to think about:

- My new libInstPatch actually needs a bit of testing and debugging, as I just recently completed the API and have not brought the new development version of Swami up to date to actually use it.
- The loading/saving API is not flexible enough (it hasn't been re-written yet for libInstPatch, which is now no longer SoundFont centric).
- How well would the object system libInstPatch uses fit with a real time synthesizer? What kind of system could be used to associate synthesizer time-critical data with patch objects?
- Multi-threaded objects: while they make server/multiple-client architectures possible, they also add excess locking requirements, etc. For example: all object lists must be locked before iterating over them in libInstPatch, unless you use the user iterator routines, which make a copy of object lists.
- I decided not to deal with converting audio data in libInstPatch, although I'm sure this will be necessary as more formats are added. Perhaps a sample library like libsndfile could be used for such things; I know the author is interested in this stuff.

My library has pretty much been designed around the idea of maximum flexibility in the patch file realm, acting like a patch server that can have multiple clients editing the same patches (think distributed patch editing sessions, between programs or computers). I have not really thought about the real time realm required by soft synths. It may be that my design cannot incorporate what's required by LinuxSampler, but I would really like it to.

This project sounds really exciting. I'm just hoping I can keep up with the massive amount of email these discussions involve (and actually get some programming done at the same time :)

Cheers.
	Josh Green |
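[editorial sketch] The locking trade-off Josh describes ("all object lists must be locked before iterating over them ... the user iterator routines ... make a copy of object lists") is the classic lock-vs-snapshot choice. A toy illustration in Python; libInstPatch does this in C with GObject, and the class and method names below are invented:

```python
# Sketch of snapshot-based iteration over a shared object list, the
# approach the "user iterator routines" above take: copy the list under
# the lock, then iterate the copy without holding the lock.

import threading

class PatchItemList:
    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def add(self, item):
        with self._lock:
            self._items.append(item)

    def snapshot(self):
        # Copy under the lock; callers iterate the copy lock-free, so a
        # concurrent add() can never invalidate their traversal. The cost
        # is the copy itself, and the copy may be stale by the time it is
        # read -- the "excess locking requirements" trade-off in action.
        with self._lock:
            return list(self._items)

patches = PatchItemList()
patches.add("Preset Foo")
patches.add("Preset Bar")
for name in patches.snapshot():   # safe even if another thread adds now
    pass
```

The alternative, holding the lock for the whole traversal, avoids the copy but blocks every writer for the duration, which is exactly what a real-time synth thread cannot afford.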
From: Steve H. <S.W...@ec...> - 2002-11-05 22:39:52
|
On Tue, Nov 05, 2002 at 09:04:04 +0100, Matthias Weiss wrote:
> The situation of a context switch before the sampler finished its work
> for the given jack cycle depends on whether linuxsampler runs in
> SCHED_FIFO mode and whether there is a higher priority process preempting
> the sampler process. This shouldn't happen within one jack graph, but it
> could happen if there are parallel jack graphs.

It's still a needless overhead.

> In both cases, if linuxsampler is in-process of a host app (let's say a
> sequencer) or if it's running in a separate process, there would be the
> same context switch to another client. So the only benefit would be that
> there is no process context switch between the host application and the
> sampler, but still a thread context switch. Steve, does the cache
> clearing also apply to thread switches?

I don't know if it applies to threads as well.

- Steve |
From: Matthias W. <mat...@in...> - 2002-11-05 20:07:22
|
On Mon, Nov 04, 2002 at 10:10:21PM +0000, Steve Harris wrote:
> > Well this means we have to provide GUI implementations for every graphics
> > toolkit that is used by the available sequencers.
> > If it's right that processes and threads are handled very similarly in the
> > Linux kernel, there should not be a lot of performance difference
> > between the in-process and out-of-process model; anyone know more about
> > that?
>
> One idea was that linuxsampler UIs would communicate with the main engine
> over a (non-X) socket of some kind.

I see, this complicates the implementation but would have the benefit that a crashing GUI wouldn't ruin the recording. It also forces a clean separation of GUI code and engine code.

> The problem with out-of-context is that the cache has to be cleared and
> refilled (well, the part touched, and I imagine a big sample set would use
> a lot of cache) and the context switch time is small, but non-zero. As you
> pointed out, we don't have very long to write out 64 samples for every
> active sample of every active note.

The situation of a context switch before the sampler finished its work for the given jack cycle depends on whether linuxsampler runs in SCHED_FIFO mode and whether there is a higher priority process preempting the sampler process. This shouldn't happen within one jack graph, but it could happen if there are parallel jack graphs.

In both cases, if linuxsampler is in-process of a host app (let's say a sequencer) or if it's running in a separate process, there would be the same context switch to another client. So the only benefit would be that there is no process context switch between the host application and the sampler, but still a thread context switch. Steve, does the cache clearing also apply to thread switches?

> > > regarding the AKAI samples: Steve says akai samplers were quite limited
> > > in terms of RAM availability (32-64MB)
> > > and since akai samplers allow some funny stuff like modulating the loop
> > > points I was wondering what you think about not
> > > using disk streaming for this format.
> >
> > Or caching enough audio data to cover the modulation range, which
> > might impact RAM usage.
>
> The 3000 series were limited to 32meg; generally the samples were small,
> but in either case the point is that the optimal implementation isn't disk
> streamed. It's just an example though, don't get hung up on AKAIs.

Hehe, I'm not hung up on AKAIs in any way ;-), I'd rather stick with software samplers :)) .

matthias |
From: Matthias W. <mat...@in...> - 2002-11-05 19:21:54
|
On Mon, Nov 04, 2002 at 10:26:32PM +0000, Steve Harris wrote:
> > In order to provide the whole feature set that a sample format provides, we
> > have to represent the parameters in linuxsampler. But that means we
> > already have a "grand unified sample" system.
>
> We don't have to do that, we can have format specific engines; the question
> is whether it's a good idea or not.

Which means reimplementing similar code for every engine, as you pointed out before. Maybe it's possible to extract some kind of greatest common divisor that builds the skeleton for every engine.

> Benno's plan to use dynamic compilation units would make the engines quick
> to construct. They won't be as fast as tightly hand-coded engines, but it
> may be worth it for the RAD features.

Hm, I realized that it's not obvious to me what parts should be dynamically compiled. Benno, could you explain in more detail?

matthias |