From: Steve H. <S.W...@ec...> - 2002-11-06 15:58:09
On Wed, Nov 06, 2002 at 03:34:58PM +0000, Phil Kerr wrote:
> My ideas were similar but are lower level, more akin to ARP broadcasts.

I think the higher level has advantages.

> Although SOAP and WSDL are good choices in themselves, they are heavy, and
> having to include XML parsers does add bloat. I'm not sure if going down
> the CORBA road would be good, but it does work.

Yes, I wasn't suggesting SOAP or WSDL for connection-level service discovery; they are much too heavy, though that has nothing to do with XML. I don't think you can describe the overhead of an XML parser (especially libxml) on a sampler engine as bloat. CORBA would be, however.

It would be nice if engines or GUIs could be written on any platform that can support TCP/IP and XML, without requiring the use of low-level DMIDI stuff. XML also removes the need for syntax descriptions and so on; you just publish a DTD or schema.

> John has started work on a coding guide to MWPP and we should be able to
> use his examples. Do we have access to any OSC code?

Sure, the reference implementation is available, and John L. has implemented support for it in saol IIRC. The original reference implementation was a mess, but I think it's been cleaned up and modernised since then.

- Steve
From: Phil K. <phi...@el...> - 2002-11-06 15:42:48
Thanks Steve (note to self: press reply-to-all, not reply ;)

My ideas were similar but are lower level, more akin to ARP broadcasts. An app broadcasts the DMIDI meta-command for 'who's there' and all devices reply. The session could look something like:

Enquirer -> ff:ff:00:00
Device 1 -> ff:ff:00:01 [node id] description
Device 2 -> ff:ff:00:01 [node id] description

The exact format hasn't been 100% defined yet, but it should be close to the above. I have some very basic code for this, but it hasn't been integrated into anything fully working.

Although SOAP and WSDL are good choices in themselves, they are heavy, and having to include XML parsers does add bloat. I'm not sure if going down the CORBA road would be good, but it does work.

John has started work on a coding guide to MWPP and we should be able to use his examples. Do we have access to any OSC code?

-P

On Wed, 2002-11-06 at 14:58, Steve Harris wrote:
> [putting this back on the list]
>
> On Wed, Nov 06, 2002 at 02:26:40PM +0000, Phil Kerr wrote:
> > The biggest difference between DMIDI and MWPP is its networking scope.
> > MWPP has been optimised for Internet transmission whereas DMIDI is
> > optimised for LANs and is much closer to raw MIDI. The device
> > identification schema also follows MIDI more closely than MWPP, which
> > uses SIP.
>
> OK, makes sense. I guess that means we should support both, and probably
> OSC, as that seems quite popular and has better resolution than MIDI.
> It is also UDP based IIRC.
>
> All incoming event protocols should be normalised in some way IMHO.
>
> > You mentioned a service discovery mechanism, do you mean some form of
> > application introspection that can be broadcast? This is an interesting
> > area I've been doing some thinking about. Any further thoughts in this
> > area?
>
> Yes, that's it. I've done some work in this area.
>
> There are two forms of service discovery: syntactic (API level) and
> semantic (er, harder to define ;) but we probably don't need it).
>
> There are some existing standards, e.g. WSDL (web service description
> language, http://www.w3.org/TR/wsdl), but they tend to be SOAP (XML-RPC)
> biased, and are probably overkill for what we need unless you want to
> describe specific services, e.g. an envelope editor.
>
> Basically the deal is you have either a well-known broker or broadcast your
> requirements ("I'm an Akai GUI, protocol v3.4, I want an engine"), and you
> get back a list of candidates. XML is your friend here.
>
> The scheme I would suggest goes something like:
>
> Broker fires up
> Engines register with Broker (easy if the top-level engine is also the Broker)
> UI starts, queries Broker
> Broker queries each engine in turn to find out if it can handle the UI
> Broker returns list to UI
>
> This also scales to multiple Engines on multiple machines; it just means
> that the Engine has to register with an external Broker.
>
> If the Engines are external, but they hold open a socket to the Broker,
> then they can tell if it suddenly goes away (or vice versa), which helps.
>
> - Steve
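As a rough illustration of the broadcast enquiry Phil describes, here is a minimal sketch using plain BSD sockets. The port number and framing are assumptions; only the ff:ff:00:00 / ff:ff:00:01 meta-command bytes come from the session sketch above.

```cpp
// Hypothetical 'who's there' broadcast enquiry. A real client would loop
// over replies with a timeout rather than reading a single packet.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    sockaddr_in dst = {};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                    // hypothetical DMIDI port
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);

    const unsigned char enquiry[4] = {0xff, 0xff, 0x00, 0x00}; // 'who's there'
    sendto(sock, enquiry, sizeof(enquiry), 0,
           reinterpret_cast<sockaddr*>(&dst), sizeof(dst));

    // Each device replies with ff:ff:00:01 [node id] description.
    unsigned char buf[512];
    sockaddr_in from = {};
    socklen_t len = sizeof(from);
    ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                         reinterpret_cast<sockaddr*>(&from), &len);
    if (n >= 4 && buf[0] == 0xff && buf[1] == 0xff &&
        buf[2] == 0x00 && buf[3] == 0x01)
        std::printf("device replied from %s\n", inet_ntoa(from.sin_addr));

    close(sock);
    return 0;
}
```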
From: Steve H. <S.W...@ec...> - 2002-11-06 14:59:00
[putting this back on the list]

On Wed, Nov 06, 2002 at 02:26:40PM +0000, Phil Kerr wrote:
> The biggest difference between DMIDI and MWPP is its networking scope.
> MWPP has been optimised for Internet transmission whereas DMIDI is
> optimised for LANs and is much closer to raw MIDI. The device
> identification schema also follows MIDI more closely than MWPP, which
> uses SIP.

OK, makes sense. I guess that means we should support both, and probably OSC, as that seems quite popular and has better resolution than MIDI. It is also UDP based IIRC.

All incoming event protocols should be normalised in some way IMHO.

> You mentioned a service discovery mechanism, do you mean some form of
> application introspection that can be broadcast? This is an interesting
> area I've been doing some thinking about. Any further thoughts in this
> area?

Yes, that's it. I've done some work in this area.

There are two forms of service discovery: syntactic (API level) and semantic (er, harder to define ;) but we probably don't need it).

There are some existing standards, e.g. WSDL (web service description language, http://www.w3.org/TR/wsdl), but they tend to be SOAP (XML-RPC) biased, and are probably overkill for what we need unless you want to describe specific services, e.g. an envelope editor.

Basically the deal is you have either a well-known broker or broadcast your requirements ("I'm an Akai GUI, protocol v3.4, I want an engine"), and you get back a list of candidates. XML is your friend here.

The scheme I would suggest goes something like:

Broker fires up
Engines register with Broker (easy if the top-level engine is also the Broker)
UI starts, queries Broker
Broker queries each engine in turn to find out if it can handle the UI
Broker returns list to UI

This also scales to multiple Engines on multiple machines; it just means that the Engine has to register with an external Broker.

If the Engines are external, but they hold open a socket to the Broker, then they can tell if it suddenly goes away (or vice versa), which helps.

- Steve
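To make the broker flow concrete, here is a small hypothetical sketch of the registration and query steps. Everything here, names included, is illustrative; the real thing would exchange XML over sockets as described.

```cpp
// Sketch of the broker scheme: engines register, a UI queries with its
// requirements, the broker returns a list of candidates.
#include <string>
#include <vector>

struct EngineInfo {
    std::string name;      // e.g. "akai-subengine"
    std::string protocol;  // e.g. "3.4", the protocol version it speaks
    bool canHandle(const std::string& wanted) const {
        return protocol == wanted;  // a real check would be much richer
    }
};

class Broker {
    std::vector<EngineInfo> engines_;
public:
    // "Engines register with Broker"
    void registerEngine(const EngineInfo& e) { engines_.push_back(e); }

    // "UI starts, queries Broker" with something like
    // "I'm an Akai GUI, protocol v3.4, I want an engine".
    std::vector<EngineInfo> query(const std::string& wantedProtocol) const {
        std::vector<EngineInfo> candidates;
        for (const EngineInfo& e : engines_)
            if (e.canHandle(wantedProtocol))
                candidates.push_back(e);
        return candidates;  // "Broker returns list to UI"
    }
};
```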
From: Steve H. <S.W...@ec...> - 2002-11-06 14:18:58
On Tue, Nov 05, 2002 at 11:35:24PM -0800, Josh Green wrote:
> - Using GObject, a C based object system (GStreamer uses this). This
> gives us an object system with C the lowest common denominator
> programming language. GObject also has an easy Python binding code
> generator that is part of the pygtk package.

Cool. Does GObject have C++ bindings for the C++ nazis here? ;)

> - How well would the object system libInstPatch uses fit with a real
> time synthesizer? What kind of system could be used to associate
> synthesizer time critical data with patch objects?

It would probably have to be abstracted in some way. The deal with RT-ish audio software is that you get handed a buffer big enough for 64 samples (for example), and you have <<1 ms to fill it and then return. This means: no malloc, no file I/O, no serious data traversal.

I would imagine that most engines will pre-prepare large linear sample buffers of the audio they are most likely to be required to play; then when their RT callback gets called they can just copy from the prepared buffers (maybe applying post-processing effects if they have RT controls) and then return.

If the post-processing only contains effects that are controlled by time (static envelopes, retriggered LFOs, etc.), not by user interaction (MIDI, sliders, dynamic envelopes, whatever), then they could be applied ahead of time. But maybe that is too special a case.

- Steve
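A minimal sketch of the pattern Steve describes: allocation and disk I/O happen elsewhere, and the real-time callback only copies from buffers prepared in advance. The types and names are hypothetical, not a real LinuxSampler API.

```cpp
#include <cstring>

struct PreparedVoice {
    const float* samples;  // linear sample data, filled by a non-RT thread
    long pos;              // current playback position in frames
    long length;           // valid frames in 'samples'
};

// Must finish well under 1 ms: no malloc, no file I/O, no data traversal.
void rt_process(PreparedVoice& v, float* out, long nframes) {
    long avail = v.length - v.pos;
    long n = avail < nframes ? avail : nframes;
    if (n > 0)
        std::memcpy(out, v.samples + v.pos, n * sizeof(float));
    if (n < nframes)                          // buffer ran dry: emit silence
        std::memset(out + n, 0, (nframes - n) * sizeof(float));
    v.pos += n;
}
```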
From: Steve H. <S.W...@ec...> - 2002-11-06 14:08:01
On Wed, Nov 06, 2002 at 02:08:07PM +0100, Benno Senoner wrote:
> Regarding the sampler->GUI protocol, I think it is wise to use a TCP
> socket for this, since it saves you from the hassles of handling lost
> packets, and since on a LAN TCP transmission errors show up very rarely.
> I think the speed that can be achieved (for GUI purposes) is more than
> adequate and most of the time the round-trip latency will be below 1 ms.

I agree. I also don't think it is necessary or desirable to define the on-the-wire protocol down to the action level. Some sampler engines will have things we haven't thought of; some will implement things in incompatible ways. I think it's enough to say it will be TCP (or UDP) and maybe define a service discovery mechanism.

Re. the IP MIDI thing, what's wrong with the RTP MIDI standard that's currently in the pre-RFC stage? RTP seems more sensible than multitransmit UDP to me.

> Regarding using CV, I am not 100% sure about that; sure, it simplifies
> things a bit, but I think that in some cases we lose efficiency and
> flexibility.

How so? We would gain flexibility, and the efficiency issue is not that simple. Using event-based systems it's very hard to express arbitrary modulations.

NB: I'm only advocating CV for the processing part of the signal path; the sample-based signal generation obviously needs to be event based. I seriously doubt there are any sample engines out there that are event based internally (for the post-processing part); it would make your life very hard. In any case, if we use sub-engines for each sample format, it's up to each sub-engine how it's implemented internally.

- Steve
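For readers unfamiliar with the CV idea: in a control-voltage style processing path, modulation arrives as an audio-rate buffer rather than as discrete events, so arbitrary modulations fall out naturally. A minimal hypothetical illustration:

```cpp
// Any source (envelope, LFO, smoothed MIDI CC) can fill the 'cv' buffer;
// the processing stage itself needs no event handling at all.
void apply_amplitude_cv(const float* in, const float* cv,
                        float* out, long nframes) {
    for (long i = 0; i < nframes; ++i)
        out[i] = in[i] * cv[i];
}
```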
From: Steve H. <S.W...@ec...> - 2002-11-06 13:55:32
On Wed, Nov 06, 2002 at 11:47:12AM +0000, Phil Kerr wrote:
> Hi Benno,
>
> 100mb ethernet..... Oh that's so old fashioned, WiFi is better :)

Actually, WiFi has a pretty high latency; try running X over it sometime. There is a detectable delay, even though the throughput is high enough.

- Steve
From: Phil K. <phi...@el...> - 2002-11-06 11:55:00
Hi Benno,

100mb ethernet..... Oh that's so old fashioned, WiFi is better :)

That's the goal of DMIDI; it allows any platform to communicate with anything. This really would allow LinuxSampler to gain a good foothold in non-Linux studios if it can be controlled from existing applications or hardware.

There's always a balance between using TCP and UDP, but there's a great deal of overlap, especially when we are talking about LAN environments. I've been testing networks for a while, and even on our POS 10mb LAN at work I've not dropped a packet, but TCP does give you an extra level of protection. RTTs within a LAN would be good enough to use TCP, but IIRC it does add a small delay (0.5 ms).

DMIDI uses raw UDP, as RTP would only tell you how late a packet is. UDP is also supported by a wider range of languages and hardware (important for me). The only time you need protection is when you try to stream over the Internet, and this is where John Lazzaro has done good work.

When describing ideal DMIDI networking conditions I use the analogy of audio cabling. Our target audience of non-technical musicians understand this concept well.

Double-sending UDP packets works quite well and doesn't seem to add much network overhead. MIDI doesn't have any protection mechanism and it works well.

It may be good if people check out SpiralSynth and DMIDI to get a taste of how it works: www.dmidi.org. Note: I get playback delays with SpiralSynth itself of 200-400 ms quite often; test this by using a qwerty keyboard. The plan is to add more remote control to SSM and then update SpiralSynth later.

Cheers

Phil

On Wed, 2002-11-06 at 13:08, Benno Senoner wrote:
> (discussion about DMIDI and GUI/Sampler communication)
>
> Hi Phil,
> I took a brief look at DMIDI and it looks interesting. Of course it would
> be very handy to add DMIDI support to LinuxSampler. Imagine several
> rackmount PCs running the sampler connected to a 100Mbit LAN, which allows
> you to drive the sampler even via a Windows/Mac client (Cubase, Logic)
> with a DMIDI output driver.
>
> Regarding the sampler->GUI protocol, I think it is wise to use a TCP
> socket for this, since it saves you from the hassles of handling lost
> packets, and since on a LAN TCP transmission errors show up very rarely.
> I think the speed that can be achieved (for GUI purposes) is more than
> adequate and most of the time the round-trip latency will be below 1 ms.
>
> For real-time messages like MIDI, UDP is of course much better, but I'd
> like it to be somewhat "safe" (e.g. it is not nice to hear a group of
> hanging notes due to a lost packet).
>
> Can DMIDI currently deal with errors? What protocol do you use, RTP or
> raw UDP? Not sure if the paradigm of "better late than never" fits into
> MIDI, but as said, missing notes, controller msgs etc. can do much more
> damage than late events. So I think we need a bit of redundancy when
> sending MIDI events. In the case you are using raw UDP, how about simply
> sending two identical packets for each group of events and having the
> host detect and discard the duplicates? As said before, a lost packet on
> a non-broken LAN happens very seldom (except in extremely congested
> networks, but that is not the typical case of a LAN used for MIDI
> transmission). Thus common sense tells me that if we send two packets
> instead of one, the probability that both don't come through is very,
> very low (in a uniform error distribution,
> P(both packets lost) = P(packet 1 lost) * P(packet 2 lost)).
>
> I think this is a very interesting topic and I'd like you to share your
> thoughts with us.
>
> Regarding using CV, I am not 100% sure about that; sure, it simplifies
> things a bit, but I think that in some cases we lose efficiency and
> flexibility.
>
> I'm currently designing an efficient RAM sampler object that can fit
> within the modular and recompilable sampler concept (using time stamps
> for maximum precision and efficiency). I will post a diagram and a
> description in 1-2 days so that the list can comment on and correct
> eventual design mistakes.
>
> cheers,
> Benno
From: Benno S. <be...@ga...> - 2002-11-06 11:05:10
(discussion about DMIDI and GUI/Sampler communication)

Hi Phil,

I took a brief look at DMIDI and it looks interesting. Of course it would be very handy to add DMIDI support to LinuxSampler. Imagine several rackmount PCs running the sampler connected to a 100Mbit LAN, which allows you to drive the sampler even via a Windows/Mac client (Cubase, Logic) with a DMIDI output driver.

Regarding the sampler->GUI protocol, I think it is wise to use a TCP socket for this, since it saves you from the hassles of handling lost packets, and since on a LAN TCP transmission errors show up very rarely. I think the speed that can be achieved (for GUI purposes) is more than adequate and most of the time the round-trip latency will be below 1 ms.

For real-time messages like MIDI, UDP is of course much better, but I'd like it to be somewhat "safe" (e.g. it is not nice to hear a group of hanging notes due to a lost packet).

Can DMIDI currently deal with errors? What protocol do you use, RTP or raw UDP? Not sure if the paradigm of "better late than never" fits into MIDI, but as said, missing notes, controller msgs etc. can do much more damage than late events. So I think we need a bit of redundancy when sending MIDI events.

In the case you are using raw UDP, how about simply sending two identical packets for each group of events and having the host detect and discard the duplicates? As said before, a lost packet on a non-broken LAN happens very seldom (except in extremely congested networks, but that is not the typical case of a LAN used for MIDI transmission). Thus common sense tells me that if we send two packets instead of one, the probability that both don't come through is very, very low (in a uniform error distribution, P(both packets lost) = P(packet 1 lost) * P(packet 2 lost)).

I think this is a very interesting topic and I'd like you to share your thoughts with us.

Regarding using CV, I am not 100% sure about that; sure, it simplifies things a bit, but I think that in some cases we lose efficiency and flexibility.

I'm currently designing an efficient RAM sampler object that can fit within the modular and recompilable sampler concept (using time stamps for maximum precision and efficiency). I will post a diagram and a description in 1-2 days so that the list can comment on and correct eventual design mistakes.

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
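A sketch of the redundancy scheme Benno proposes, assuming raw UDP on a connected socket and a hypothetical 32-bit sequence number prefixed to each group of events; the framing is an assumption, not part of DMIDI.

```cpp
#include <sys/socket.h>
#include <cstdint>
#include <cstring>

// Send each event packet twice; the receiver drops the duplicate.
// 'events' is assumed to fit in the 508 bytes left after the prefix.
void send_twice(int sock, uint32_t seq, const uint8_t* events, size_t len) {
    uint8_t pkt[512];
    std::memcpy(pkt, &seq, sizeof(seq));          // sequence number first
    std::memcpy(pkt + sizeof(seq), events, len);  // then the MIDI events
    for (int i = 0; i < 2; ++i)                   // two identical packets
        send(sock, pkt, sizeof(seq) + len, 0);
}

// Receiver: true if this sequence number was just seen and the packet
// should be discarded. Only catches back-to-back repeats, which is what
// this scheme produces on an otherwise healthy LAN.
bool is_duplicate(uint32_t seq, uint32_t& last_seen) {
    if (seq == last_seen)
        return true;
    last_seen = seq;
    return false;
}
```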
From: <chr...@ep...> - 2002-11-06 10:36:21
On Mon, Nov 04, 2002 at 10:10:21PM +0000, Steve Harris wrote:
> In both cases, if linuxsampler is in-process of a host app (let's say a
> sequencer) or if it's running in a separate process, there would be the
> same context switch to another client. So the only benefit would be that
> there is no process context switch between the host application and the
> sampler, but still a thread context switch. Steve, does the cache
> clearing also apply to thread switches?

Depends on the kind of thread you're using. User-level threads don't flush the cache on a context switch. About kernel-level threads I'm not really sure, because there is also a mode switch (user mode <-> kernel mode) whenever a context switch happens with this kind of threads. But I wouldn't put the sampler in another 'host' application's process anyway.
From: Phil K. <phi...@el...> - 2002-11-06 08:05:27
Hi,

Although it's probably slightly too early to discuss this in detail, it may be useful to map out some basic functionality between the GUI and engine. Some ideas (please feel free to add):

Sample select
Loop start/end points
ADSR
Main volume
Tuning
Filter selection/cutoff/resonance
Effect selection/level
ALSA/JACK config
.....

What protocol do we use to communicate between the two? UDP is a better choice than TCP due to its speed. I presume that it will be used on a LAN, so typical delays should be under 2 ms.

Using DMIDI would allow MIDI software/hardware to interact with the engine, and it's well on its way to becoming an IEEE standard. Basic functionality can be obtained by mapping control change messages to functions. I added ALSA 0.9 and DMIDI code to SpiralSynth and it was really easy to do this, and it should fit into the CV model that Steve Harris suggested. More complex data can be transmitted using a SYSEX-like format. The downside to this is MIDI's low bit resolution; the upside is almost anything can control it.

Thoughts?

Phil
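One hypothetical shape for the control-change mapping Phil describes. The Engine type is made up; the CC numbers (7, 74, 71) follow common MIDI practice for volume, cutoff, and resonance, but nothing here was agreed on the list.

```cpp
#include <functional>
#include <map>

struct Engine {                 // placeholder for the real sampler engine
    float mainVolume = 1.0f;
    float filterCutoff = 1.0f;
    float filterResonance = 0.0f;
};

using CCHandler = std::function<void(Engine&, float)>;

std::map<int, CCHandler> make_cc_map() {
    return {
        {7,  [](Engine& e, float v) { e.mainVolume = v; }},
        {74, [](Engine& e, float v) { e.filterCutoff = v; }},
        {71, [](Engine& e, float v) { e.filterResonance = v; }},
    };
}

// MIDI's low bit resolution: a 7-bit value scaled to 0.0 .. 1.0.
void on_control_change(Engine& e, const std::map<int, CCHandler>& handlers,
                       int cc, int value) {
    auto it = handlers.find(cc);
    if (it != handlers.end())
        it->second(e, value / 127.0f);
}
```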
From: Josh G. <jg...@us...> - 2002-11-06 07:32:58
Went on a trip for a few days and came back to find my email box full of discussions about a patch loading library. What follows are details concerning my libInstPatch library and its design goals. If you haven't had a look at my API for libInstPatch (http://swami.sourceforge.net), here are my current plans for it. I am very curious how well this project could fit in with the LinuxSampler project (actually I'm also wondering if Swami could become a GUI for it, since I'm planning on fully objectifying the GUI as well; it is pretty much already, just not the interface).

- Using GObject, a C based object system (GStreamer uses this). This gives us an object system with C the lowest common denominator programming language. GObject also has an easy Python binding code generator that is part of the pygtk package.
- Not attempting to make a unified patch format (sounds like others agree).
- Taking advantage of GObject's typed property system for setting parameters on patches. This provides for a somewhat unified API between patch formats; an example of setting parameters on a SoundFont preset in C:

g_object_set (preset, "name", "Preset Foo", "bank", 0, "psetnum", 4, NULL);

- Patch formats are broken out into individual GObjects (IPatchSFont, IPatchSFPreset, IPatchSFInst, IPatchSFZone and IPatchSFSample for SoundFont banks, for example) and organized in a parent/child tree structure where appropriate.
- Multi-threaded patch objects.
- A sample store object that provides for pluggable methods of storing sample data (RAM, swap file, etc).

I believe this design will provide for a rather flexible patch file library, and other formats can be easily added to it. Some things to think about:

- My new libInstPatch actually needs a bit of testing and debugging, as I just recently completed the API and have not brought the new development version of Swami up to date to actually use it.
- The loading/saving API is not flexible enough (it hasn't been re-written yet for libInstPatch, which is now no longer SoundFont centric).
- How well would the object system libInstPatch uses fit with a real-time synthesizer? What kind of system could be used to associate synthesizer time-critical data with patch objects?
- Multi-threaded objects: while they make server/multiple-client architectures possible, they also add excess locking requirements etc. For example, all object lists must be locked before iterating over them in libInstPatch, unless you use the iterator routines, which make a copy of object lists.
- I decided not to deal with converting audio data in libInstPatch, although I'm sure this will be necessary as more formats are added. Perhaps a sample library like libsndfile could be used for such things; I know the author is interested in this stuff.

My library has pretty much been designed around the idea of maximum flexibility in the patch file realm, acting like a patch server that can have multiple clients editing the same patches (think distributed patch editing sessions, between programs or computers). I have not really thought about the real-time realm required by soft synths. It may be that my design cannot incorporate what's required by LinuxSampler, but I would really like it to.

This project sounds really exciting. I'm just hoping I can keep up with the massive amount of email these discussions involve (and actually get some programming done at the same time :)

Cheers.

Josh Green
From: Steve H. <S.W...@ec...> - 2002-11-05 22:39:52
On Tue, Nov 05, 2002 at 09:04:04 +0100, Matthias Weiss wrote:
> The situation of a context switch before the sampler finished its work
> for the given jack cycle depends on whether linuxsampler runs in
> SCHED_FIFO mode and whether there is a higher priority process preempting
> the sampler process. This shouldn't happen within one jack graph, but it
> could happen if there are parallel jack graphs.

It's still a needless overhead.

> In both cases, if linuxsampler is in-process of a host app (let's say a
> sequencer) or if it's running in a separate process, there would be the
> same context switch to another client. So the only benefit would be that
> there is no process context switch between the host application and the
> sampler, but still a thread context switch. Steve, does the cache
> clearing also apply to thread switches?

I don't know if it applies to threads as well.

- Steve
From: Matthias W. <mat...@in...> - 2002-11-05 20:07:22
On Mon, Nov 04, 2002 at 10:10:21PM +0000, Steve Harris wrote:
> > Well this means we have to provide GUI implementations for every graphic
> > toolkit that is used by the available sequencers.
> > If it's right that processes and threads are handled very similarly in
> > the Linux kernel, there shouldn't be a lot of performance difference
> > between the in-process and out-of-process models; anyone know more about
> > that?
>
> One idea was that linuxsampler UIs would communicate with the main engine
> over a (non X) socket of some kind.

I see; this complicates the implementation but would have the benefit that a crashing GUI wouldn't ruin the recording. It also forces a clean separation of GUI code and engine code.

> The problem with out-of-context is that the cache has to be cleared and
> refilled (well, the part touched, and I imagine a big sample set would use
> a lot of cache) and the context switch time is small, but non-zero. As you
> pointed out, we don't have very long to write out 64 samples for every
> active sample of every active note.

The situation of a context switch before the sampler finished its work for the given jack cycle depends on whether linuxsampler runs in SCHED_FIFO mode and whether there is a higher priority process preempting the sampler process. This shouldn't happen within one jack graph, but it could happen if there are parallel jack graphs.

In both cases, if linuxsampler is in-process of a host app (let's say a sequencer) or if it's running in a separate process, there would be the same context switch to another client. So the only benefit would be that there is no process context switch between the host application and the sampler, but still a thread context switch. Steve, does the cache clearing also apply to thread switches?

> > > regarding the AKAI samples: Steve says akai samplers were quite limited
> > > in terms of RAM availability (32-64MB) and since akai samplers allow
> > > some funny stuff like modulating the loop points I was wondering what
> > > you think about not using disk streaming for this format.
> >
> > Or caching enough audio data that covers the modulation range, which
> > might impact RAM usage.
>
> The 3000 series were limited to 32meg, generally the samples were small,
> but in either case the point is that the optimal implementation isn't disk
> streamed. It's just an example though, don't get hung up on AKAIs.

Hehe, I'm not hung up on AKAIs in any way ;-), I'd rather stick with software samplers :)) .

matthias
From: Matthias W. <mat...@in...> - 2002-11-05 19:21:54
On Mon, Nov 04, 2002 at 10:26:32PM +0000, Steve Harris wrote:
> > In order to provide the whole feature set that a sample format provides,
> > we have to represent the parameters in linuxsampler. But that means we
> > already have a "grand unified sample" system.
>
> We don't have to do that, we can have format-specific engines; the
> question is whether it's a good idea or not.

Which means reimplementing similar code for every engine, as you pointed out before. Maybe it's possible to extract some kind of greatest common divisor that builds the skeleton for every engine.

> Benno's plan to use dynamic compilation units would make the engines quick
> to construct. They won't be as fast as tightly hand-coded engines, but it
> may be worth it for the RAD features.

Hm, I realize it's not obvious to me what parts should be dynamically compiled. Benno, could you explain in more detail?

matthias
From: Steve H. <S.W...@ec...> - 2002-11-05 18:15:59
On Tue, Nov 05, 2002 at 07:20:25 +0100, Cadaceus wrote:
> P.S.: Anyone thought of something like VST support for linuxsampler (the
> diagram didn't look like it)? I don't know if rosegarden or anything else
> supports it (or a Linux "version" of VST), but it would be nice to control
> linuxsampler from your favorite sequencer (that's a thing most gigastudio
> users still get pissed off by, because VST support is still missing). Just
> an idea for the future (certainly not for the beginning)...

That would be LADSPA.

- Steve
From: Cadaceus <cad...@Po...> - 2002-11-05 18:06:38
Hi everyone, thought I'd quickly introduce myself. My name is Alex Klein; I'm studying computer science (besides being a wanna-be composer), and since I'm going to have lots of spare time next semester (at least I hope so :) I thought I'd try to help a bit with linuxsampler.

Quite new to Linux, I'm currently in the process of "audio-programming research", or bluntly said, I don't have a clue about it now and hope it's different in a month or two... :) So I probably won't be of much help in the planning phase, but I can offer some C++ experience and "a musician's/gigastudio user's view" (so at least you've got a beta tester... :)...

Fields of interest? Well, pretty much the whole thing (most of all the audio thread: envelope, looping, etc., though I don't have much experience in that field) except sample importing... so if you've got something to do, just try me, but keep in mind I'm a musician and a programmer, unfortunately, up to now, not a music-app programmer...

Well, going over to reading on ALSA, LADSPA and all that stuff... :)

Cheers

Alex

P.S.: Anyone thought of something like VST support for linuxsampler (the diagram didn't look like it)? I don't know if rosegarden or anything else supports it (or a Linux "version" of VST), but it would be nice to control linuxsampler from your favorite sequencer (that's a thing most gigastudio users still get pissed off by, because VST support is still missing). Just an idea for the future (certainly not for the beginning)...
From: Steve H. <S.W...@ec...> - 2002-11-05 08:32:00
On Tue, Nov 05, 2002 at 02:42:25 -0300, Juan Linietsky wrote:
> > The problem with out-of-context is that the cache has to be cleared and
> > refilled (well, the part touched, and I imagine a big sample set would
> > use a lot of cache) and the context switch time is small, but non-zero.
> > As you pointed out, we don't have very long to write out 64 samples for
> > every active sample of every active note.
>
> Sorry, I didn't get the parent mail to this, could you please explain
> where this out-of-context issue comes from?

It was a braino; I meant out-of-process. The previous poster was discussing why it was necessary to go in-process for linuxsampler. I think the short answer is that it isn't necessary, but it's damn hard as it is and we don't need anything to make it any harder.

- Steve
From: Juan L. <co...@re...> - 2002-11-05 08:16:24
Ok, just did more work on the codebase (basic code skeleton). Just a bit is missing to get it to work, but I'd rather wait until tomorrow so I can get it well checked with gcc 3.2, a newer version of jack, etc. Let's hope tomorrow I have something working!

cheers

Juan Linietsky
From: Juan L. <co...@re...> - 2002-11-05 05:40:23
On Mon, 4 Nov 2002 22:10:21 +0000 Steve Harris <S.W...@ec...> wrote:
> On Mon, Nov 04, 2002 at 10:15:56 +0100, Matthias Weiss wrote:
> > > Regarding JACK we will probably need to use the in-process model
> > > (which is actually not used much AFAIK) in order to achieve latencies
> > > at par with direct output, so this needs further research.
> >
> > Well this means we have to provide GUI implementations for every graphic
> > toolkit that is used by the available sequencers.
> > If it's right that processes and threads are handled very similarly in
> > the Linux kernel, there shouldn't be a lot of performance difference
> > between the in-process and out-of-process models; anyone know more about
> > that?
>
> One idea was that linuxsampler UIs would communicate with the main engine
> over a (non X) socket of some kind.
>
> The problem with out-of-context is that the cache has to be cleared and
> refilled (well, the part touched, and I imagine a big sample set would use
> a lot of cache) and the context switch time is small, but non-zero. As you
> pointed out, we don't have very long to write out 64 samples for every
> active sample of every active note.

Sorry, I didn't get the parent mail to this, could you please explain where this out-of-context issue comes from?

thanks!

Juan Linietsky
From: Juan L. <co...@re...> - 2002-11-05 03:35:22
On Mon, 4 Nov 2002 14:13:16 +0000 Steve Harris <S.W...@ec...> wrote:
> On Tue, Nov 05, 2002 at 12:03:36 +1000, [3] wrote:
> > > So, I think it is better to have separate sub-engines that communicate
> > > with the main engine at a high level (eg. to the sub-engine: "Here is
> > > a bunch of event data ...", from the sub-engine: "I want 8 outputs",
> > > "here is a lump of audio data ...").
> > > The alternative would be to normalise all the sample formats into one
> > > grand unified sample format and just handle that (I believe that is
> > > how gigasampler works?).
>
> Of course, the counter-argument to all this is that writing a full
> sampler engine for every format we want to support fully sucks, no-one
> probably needs all that functionality anyway, and we should just write
> translators onto a common, comprehensive format and live with the slight
> conversion loss. <shrug>
>
> - Steve

I think I said this over IRC, but I'd like to say it again, the central part of the issue being the "Voice". The abstraction to me should be like this (hope you have a fixed font; in any other case, left item connects to last :)

#Sample Library reading engine -> *Disk Streamer -> #Voice <- *Engine Manager <- #Engine
|_________________________________________________________________________________^

* means common-to-all object
# means specific implementation inheriting from a common base (through a polymorphic interface)

So, I think the Engine/Voice processing/Library File reading should be implementation specific (giga/akai/etc), but it should communicate with the existing framework through common objects to make our life easier while programming. Remember, not everything is just reading and streaming; all the MIDI event handling, voice mixing/allocation, effect processing and buffer exporting must be common to all interfaces. This would end up in a framework for emulating existing samplers. I used this approach in LegaSynth (http://reduz.com.ar/legasynth) with a lot of success already, and it should make writing specific implementations of sampling engines a _lot_ easier.

Juan Linietsky
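A hypothetical C++ rendering of the layering Juan sketches: the '#' items become abstract base classes implemented per format, while the '*' objects and the mixing code only ever see the common interface. These names are illustrative, not the LinuxSampler API.

```cpp
#include <vector>

class Voice {  // '#' item: implemented per format (Akai, Giga, ...)
public:
    virtual ~Voice() {}
    virtual void noteOn(int note, int velocity) = 0;
    virtual void render(float* out, long nframes) = 0;  // mix into 'out'
};

class Engine {  // '#' item: format-specific factory for its own voices
public:
    virtual ~Engine() {}
    virtual Voice* allocateVoice() = 0;
};

// '*' side: event handling, voice allocation and mixing stay common and
// only ever talk to the Voice interface.
void mix_voices(std::vector<Voice*>& voices, float* out, long nframes) {
    for (Voice* v : voices)
        v->render(out, nframes);
}
```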
From: Benno S. <be...@ga...> - 2002-11-04 23:34:42
> > In order to provide the whole feature set that a sample format provides,
> > we have to represent the parameters in linuxsampler. But that means we
> > already have a "grand unified sample" system.
>
> We don't have to do that, we can have format-specific engines; the
> question is whether it's a good idea or not.
>
> Benno's plan to use dynamic compilation units would make the engines quick
> to construct. They won't be as fast as tightly hand-coded engines, but it
> may be worth it for the RAD features.
>
> - Steve

I think the speed penalty is low since you are probably using only pre-optimized macros, plus you can always take the generated source and optimize it manually. Not a big deal.

Steve, as suspected, there are people who agree with me that when loading AKAI samples into RAM you can easily end up burning 256MB of RAM, which is a lot for non-high-end PCs. Let's see how the discussion evolves... AKAI experts, what do you say?

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
From: Steve H. <S.W...@ec...> - 2002-11-04 22:26:37
On Mon, Nov 04, 2002 at 10:27:49 +0100, Matthias Weiss wrote:
> > Of course, the counter-argument to all this is that writing a full
> > sampler engine for every format we want to support fully sucks, no-one
> > probably needs all that functionality anyway, and we should just write
> > translators onto a common, comprehensive format and live with the slight
> > conversion loss. <shrug>
>
> In order to provide the whole feature set that a sample format provides,
> we have to represent the parameters in linuxsampler. But that means we
> already have a "grand unified sample" system.

We don't have to do that, we can have format-specific engines; the question is whether it's a good idea or not.

Benno's plan to use dynamic compilation units would make the engines quick to construct. They won't be as fast as tightly hand-coded engines, but it may be worth it for the RAD features.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-04 22:24:08
On Mon, Nov 04, 2002 at 10:23:13 +0100, Christian Schoenebeck wrote:
> It's been a while since I created my last Akai programs, but AFAIK the
> S3000 series (only regarding this start point) just differs between four
> velocities. I think they called them zones, and for each of these 4 zones
> you

There is also start point variation, keyed off note velocity; the range is -9999 to +9999 or something like that, but there is no indication what the units are (as usual). I don't know how much it was used; I used it once or twice, I think. It was good for percussion.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-04 22:20:13
On Mon, Nov 04, 2002 at 07:55:50 -0000, Paul Kellett wrote:
> > regarding the AKAI samples: Steve says akai samplers were quite limited
> > in terms of RAM availability (32-64MB) and since akai samplers allow
> > some funny stuff like modulating the loop points I was wondering what
> > you think about not using disk streaming for this format.
> > How about the S5000/6000 series? What is the maximum RAM configuration?
> > Do they allow nasty loop point modulation too?
>
> I don't think Akai ever had loop point modulation? Except while editing
> samples, anyway.

No, old Akais only have start point modulation (based on note velocity). I think EMUs might have loop point modulation; I've used them much less though.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-04 22:10:27
On Mon, Nov 04, 2002 at 10:15:56 +0100, Matthias Weiss wrote:
> > Regarding JACK we will probably need to use the in-process model (which
> > is actually not used much AFAIK) in order to achieve latencies at par
> > with direct output, so this needs further research.
>
> Well this means we have to provide GUI implementations for every graphic
> toolkit that is used by the available sequencers.
> If it's right that processes and threads are handled very similarly in
> the Linux kernel, there shouldn't be a lot of performance difference
> between the in-process and out-of-process models; anyone know more about
> that?

One idea was that linuxsampler UIs would communicate with the main engine over a (non X) socket of some kind.

The problem with out-of-context is that the cache has to be cleared and refilled (well, the part touched, and I imagine a big sample set would use a lot of cache) and the context switch time is small, but non-zero. As you pointed out, we don't have very long to write out 64 samples for every active sample of every active note.

> > regarding the AKAI samples: Steve says akai samplers were quite limited
> > in terms of RAM availability (32-64MB) and since akai samplers allow
> > some funny stuff like modulating the loop points I was wondering what
> > you think about not using disk streaming for this format.
>
> Or caching enough audio data that covers the modulation range, which
> might impact RAM usage.

The 3000 series were limited to 32meg; generally the samples were small, but in either case the point is that the optimal implementation isn't disk streamed. It's just an example though, don't get hung up on AKAIs.

- Steve