From: Matthias W. <mat...@in...> - 2002-11-13 21:20:02
Hi Josh!

First of all, very interesting idea!

On Wed, Nov 13, 2002 at 12:35:26AM -0800, Josh Green wrote:
> The way I'm currently thinking of things is that libInstPatch could act
> as a patch "server" and would contain the loaded patch objects.

Why is it necessary that those patch servers run in their own process?
Why not simply add this functionality to the nodes themselves?

> size/name or perhaps MD5 on the parameter data). In the case of the GUI,
> I think it needs to have direct access to the patch objects (so anything
> it is editing should be locally available, either being served or
> synchronized to another machine).

Hm, would that mean the GUI has to keep a copy of the state data? I think
the state data (modulation parameters, etc.) should only be kept by the
nodes, because those are the ones that always need the data available.
It would look something like this:

GUI1 <--> libInstPatch.so/LinuxSampler1
            ^   ^
            |   |
            |   +- Vintage Dreams SF2 (Master)
            |   +- Lots of Foo.DLS (Slave)
            |   +- Wide Load.gig
            |   |
            |   +----> libInstPatch.so/LinuxSampler2
            |             ^
            |             |
            ...           +--> GUI2

With this scheme, libInstPatch would be a shared library that handles (to
the outside world) the peer communication as well as the communication
with the GUI, and controls the state changes to the inside world (sample
engine parameters). The state data would have to be kept only once per
node, with no duplication in the patch server and GUI.

> GUI and LinuxSampler would not necessarily be communicating with the
> same libInstPatch server, they could be on separate synchronized
> machines.

What's the advantage of this?

> Anyways, that's how I see it. Does this make sense? Cheers.

To me the idea is great and makes a lot of sense ;-).

matthias
From: Steve H. <S.W...@ec...> - 2002-11-13 21:06:32
On Wed, Nov 13, 2002 at 09:46:42 +0100, Nicolas Justin wrote:
> On Wednesday 13 November 2002 21:25, Nicolas Justin wrote:
> > I can try to look at the code and see if there is room for optimisations.
> > But I'm very new to this project, and I think there are more experienced
> > programmers than me on this list.
>
> Maybe you can look at libSIMD (http://libsimd.sf.net), this is a library
> implementing simple maths functions with SIMD instructions.

That's interesting; there only appear to be 3DNow! accelerations at the
moment, but it could be useful once they get SSE done.

- Steve
From: Nicolas J. <nic...@fr...> - 2002-11-13 20:47:49
On Wednesday 13 November 2002 21:25, Nicolas Justin wrote:
> I can try to look at the code and see if there is room for optimisations.
> But I'm very new to this project, and I think there are more experienced
> programmers than me on this list.

Maybe you can look at libSIMD (http://libsimd.sf.net), this is a library
implementing simple maths functions with SIMD instructions.

There is also a patch by Stéphane Marchesin implementing an MMX mixer and
audio converter for SDL (http://www.libsdl.org); you can find it here:
http://dea-dess-info.u-strasbg.fr/~marchesin/SDL_mmx.patch

Just my 2 cents...

-- Nicolas Justin - <nic...@fr...>
From: Steve H. <S.W...@ec...> - 2002-11-13 20:34:57
On Wed, Nov 13, 2002 at 09:25:46 +0100, Nicolas Justin wrote:
> I can try to look at the code and see if there is room for optimisations.
> But I'm very new to this project, and I think there are more experienced
> programmers than me on this list.

I would wait until we have finalised the inner loop; it's likely to change
a lot.

- Steve
From: Nicolas J. <nic...@fr...> - 2002-11-13 20:26:25
On Wednesday 13 November 2002 18:58, Steve Harris wrote:
> > I heard the P4 heavily relies on optimal SSE2 optimizations in order to
> > deliver maximum performance and it seems that both gcc and icc do not
> > work optimally in this regard.
>
> SSE, not SSE2 IIRC. SSE2 is still only 128 bits wide, and uses 64-bit
> floats so it can only go two-way.

Gcc and even icc are not really good at code vectorisation. IMHO it is a
better idea to parallelise the code manually using the SSE instructions;
you will get better performance.

I can try to look at the code and see if there is room for optimisations.
But I'm very new to this project, and I think there are more experienced
programmers than me on this list.

-- Nicolas Justin - <nic...@fr...>
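To give a flavour of the manual SSE parallelisation Nicolas is suggesting,
here is a minimal sketch (not LinuxSampler code; function name, alignment
and length assumptions are invented) that scales and accumulates a mono
buffer four floats at a time using the xmmintrin.h intrinsics:

    /* Hypothetical sketch: mix n samples of src into dst with gain vol,
     * four floats per iteration using SSE intrinsics. Assumes n % 4 == 0
     * and 16-byte aligned buffers; not taken from any project sources. */
    #include <xmmintrin.h>

    static void mix_sse(float *dst, const float *src, float vol, int n)
    {
        __m128 gain = _mm_set1_ps(vol);          /* broadcast gain to 4 lanes */
        int i;
        for (i = 0; i < n; i += 4) {
            __m128 s = _mm_load_ps(src + i);     /* 4 source samples      */
            __m128 d = _mm_load_ps(dst + i);     /* 4 accumulated samples */
            d = _mm_add_ps(d, _mm_mul_ps(s, gain));
            _mm_store_ps(dst + i, d);            /* write back            */
        }
    }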
From: Steve H. <S.W...@ec...> - 2002-11-13 17:58:47
On Wed, Nov 13, 2002 at 05:47:14 +0100, Benno Senoner wrote:
> Since we will probably go all floating point (because of high precision,
> head room and flexibility over integer) you need to be careful to
> optimize the code because, as we all know, x86 FPUs do suck a bit.

Right, but we can use SSE in P4's (and maybe P3's if it's faster) with
gcc3. This just needs the flags I posted to l-a-d, no code changes.

> Steve H: I have added stereo mixing with volume support to better
> reflect the behaviour of a real sampler with pan support; fortunately
> the performance drop from the mono version is minimal thanks to caching.

Excellent. I thought we were wasting a lot of cycles waiting for the RAM
in the mono case.

[events and CV]
> One might say this is a waste of CPU but as Steve wrote in an earlier
> posting on this list, the rate of CV values is usually much lower (1/4 -
> 1/16) than the samplerate. This means that even if the event stream is
> very dense the added overhead is minimal.
> I think the best way to find a good compromise between flexibility
> and speed is to try out several methods and pick those with the best
> price/performance ratio.

OK, well, events are more LADSPA-like, which is convenient I suppose. This
is really an internal engine thing though, so we don't have to decide it
up front.

> Are FXes in soft samplers/synths usually stereo or mono?
> Since we are using recompilation this can be made flexible but I have
> noticed that FX send channels can chew up quite some CPU.
> see this:

I think on older samplers they are stereo return (to the main mix outs);
newer samplers have many more outputs, so I don't know how they handle it.
The number of send channels is equal to the number of channels in the
sample.

> P4:
> samples/sec = 12528321.035306  mono voices at 44.1kHz = 284.088912
> efficiency: 144.401951 CPU cycles/sample
>
> Athlon:
> samples/sec = 14626412.219113  mono voices at 44.1kHz = 331.664676
> efficiency: 95.721219 CPU cycles/sample
>
> This with both gcc 3.2 and 2.96. The P4 seems to suck quite a bit.

P4's really don't like branches from what I have heard (very long
pipelines). The Athlon is much shallower. What RAM systems did the two
machines have?

> Using the Intel C / gcc compilers with SSE optimizations did not
> provide any speedup, in some cases the performance was even worse.

Even on P4?

> I heard the P4 heavily relies on optimal SSE2 optimizations in order to
> deliver maximum performance and it seems that both gcc and icc do not
> work optimally in this regard.

SSE, not SSE2 IIRC. SSE2 is still only 128 bits wide, and uses 64-bit
floats so it can only go two-way.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-13 17:44:15
On Wed, Nov 13, 2002 at 11:35:58 -0600, Richard A. Smith wrote:
> On 13 Nov 2002 17:47:14 +0100, Benno Senoner wrote:
> > The strange thing is that on most modern x86 CPUs using doubles is as
> > fast/faster than floats. That's good :-)
>
> Perhaps that's due to data bus size and the FPU size. Modern x86
> FPUs are 80-bit IIRC. The data bus is 64 bits wide so fetching a
> double or float from memory takes the same amount of cycles.

The 80-bit format is mainly internal; it's used to maintain IEEE
compatibility in the 387 as I understand it. SSE does not use it.

The problem with using doubles (64-bit) or long doubles (80-bit) in your
code is the cache effects. You still have the same number of FP stack
registers though. See Tim G.'s early attempts with LADSPA filters: it
makes no difference when that's the only thing running, but when you add
more processes it becomes much slower.

If using doubles was actually faster you may have missed the trailing f
off a constant, or used e.g. sin() instead of sinf().

- Steve
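As a minimal illustration of Steve's last point (invented for this write-up,
not from the benchmark code), a double literal or a call to the double-
precision libm function silently promotes the whole expression to double:

    /* Sketch of the accidental-double pitfall: the first form promotes the
     * expression to double because of the 0.5 literal and sin(); the second
     * stays in single precision. */
    #include <math.h>

    float lfo_bad(float phase)
    {
        return 0.5 * sin(phase);     /* double constant + double sin() */
    }

    float lfo_good(float phase)
    {
        return 0.5f * sinf(phase);   /* float constant + float sinf()  */
    }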
From: Richard A. S. <rs...@bi...> - 2002-11-13 17:36:11
On 13 Nov 2002 17:47:14 +0100, Benno Senoner wrote:
> The strange thing is that on most modern x86 CPUs using doubles is as
> fast/faster than floats. That's good :-)

Perhaps that's due to data bus size and the FPU size. Modern x86 FPUs are
80-bit IIRC. The data bus is 64 bits wide, so fetching a double or a float
from memory takes the same amount of cycles. Perhaps going from a 32-bit
float to the 80-bit FPU format involves a conversion that uses more cycles
than going from a 64-bit double to 80-bit.

--
Richard A. Smith                  Bitworks, Inc.
rs...@bi...                        479.846.5777 x104
Sr. Design Engineer               http://www.bitworks.com
From: Benno S. <be...@ga...> - 2002-11-13 16:35:37
Hi,

During the last couple of days I performed benchmarks in order to analyze
the speed of the resampling/mixing routines which will make up the core of
the RAM sampler module.

Since we will probably go all floating point (because of high precision,
head room and flexibility over integer) you need to be careful to optimize
the code because, as we all know, x86 FPUs do suck a bit. I performed
benchmarks on a Celeron, P4 and Athlon and must admit that the Athlon will
make for a damn good sampler box since it seems to have a speedy FPU. The
difference is notable especially when using cubic interpolation: an Athlon
1400 matches the performance of a 1.8GHz P4.

Anyway, if you want to play a bit with my benchmark (it's only a quick
hack to test a few routines) just download it from
http://www.linuxdj.com/benno/rspeed4.tgz

Steve H: I have added stereo mixing with volume support to better reflect
the behaviour of a real sampler with pan support; fortunately the
performance drop from the mono version is minimal thanks to caching.

The strange thing is that on most modern x86 CPUs using doubles is as
fast/faster than floats. That's good :-)

Regarding the RAM sampler module I proposed earlier: I studied some
event-based stuff David Olofson proposed a long time ago, and since
Steve H. said "we will probably need both event based stuff and control
values but the control value frequency does not need to be that high",
I made a few calculations and it seems to pay off to implement the control
values as fine-grained events. One might say this is a waste of CPU, but
as Steve wrote in an earlier posting on this list, the rate of CV values
is usually much lower (1/4 - 1/16) than the samplerate. This means that
even if the event stream is very dense the added overhead is minimal.
I think the best way to find a good compromise between flexibility and
speed is to try out several methods and pick those with the best
price/performance ratio.

I have an important question regarding the effect sends (since I am not an
expert here): are FXes in soft samplers/synths usually stereo or mono?
Since we are using recompilation this can be made flexible, but I have
noticed that FX send channels can chew up quite some CPU. See this data
from my Celeron 366, cubic interpolation with looping, mono voices but
stereo output (with pan):

no FX sends:
samples/sec = 4879341.532237  mono voices at 44.1kHz = 110.642665
efficiency: 74.957245 CPU cycles/sample

one stereo FX send:
samples/sec = 4104676.981704  mono voices at 44.1kHz = 93.076576
efficiency: 89.103723 CPU cycles/sample

two stereo FX sends:
samples/sec = 3508911.444682  mono voices at 44.1kHz = 79.567153
efficiency: 104.232326 CPU cycles/sample

The CPU cost of two mono sends is about the same as that of one stereo
send, so I was just wondering which way we should go initially (mono, I
guess?). The innermost mixing loop with 2 stereo FX sends looks like this:

    sample_val = CUBIC_INTERPOLATOR;
    output_sum_left[u]   += volume_left      * sample_val;
    output_sum_right[u]  += volume_right     * sample_val;
    effect_sum_left[u]   += fx_volume_left   * sample_val;
    effect_sum_right[u]  += fx_volume_right  * sample_val;
    effect2_sum_left[u]  += fx2_volume_left  * sample_val;
    effect2_sum_right[u] += fx2_volume_right * sample_val;

Makes sense? (output_sum_left/right is the dry component; effect_sum and
effect2_sum are the FX sends.)

Some other numbers I got from a P4 1.8GHz vs an Athlon 1400, cubic
interpolation, looping and 2 stereo FX sends:

P4:
samples/sec = 12528321.035306  mono voices at 44.1kHz = 284.088912
efficiency: 144.401951 CPU cycles/sample

Athlon:
samples/sec = 14626412.219113  mono voices at 44.1kHz = 331.664676
efficiency: 95.721219 CPU cycles/sample

This with both gcc 3.2 and 2.96. The P4 seems to suck quite a bit. Using
the Intel C / gcc compilers with SSE optimizations did not provide any
speedup; in some cases the performance was even worse. I heard the P4
heavily relies on optimal SSE2 optimizations in order to deliver maximum
performance, and it seems that both gcc and icc do not work optimally in
this regard. (If I get my hands on a Visual C++ compiler on a P4 box I
will try to run it there to see what the performance looks like.)

Let me know your thoughts about all the issues I raised in this (boring)
mail :-)

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
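For readers who want to see that loop in context, here is a small
self-contained sketch (a reconstruction for illustration, not Benno's
rspeed4 code) of a mono-voice, stereo-output mix loop with two stereo FX
sends; it uses linear interpolation in place of the CUBIC_INTERPOLATOR
macro, and the struct layout is invented:

    /* Reconstruction for illustration only: mix one mono voice into stereo
     * dry and two stereo FX-send buffers, resampling with linear
     * interpolation instead of the cubic macro used in the post. */
    typedef struct {
        const float *sample;       /* source sample data                */
        double pos;                /* fractional playback position      */
        double step;               /* pitch ratio, 1.0 = original pitch */
        float vol_l, vol_r;        /* dry pan/volume                    */
        float fx1_l, fx1_r;        /* FX send 1 levels                  */
        float fx2_l, fx2_r;        /* FX send 2 levels                  */
    } Voice;

    static void mix_voice(Voice *v, int frames,
                          float *dry_l, float *dry_r,
                          float *fx1_l, float *fx1_r,
                          float *fx2_l, float *fx2_r)
    {
        int u;
        for (u = 0; u < frames; u++) {
            int   i    = (int)v->pos;
            float frac = (float)(v->pos - i);
            /* linear interpolation; the real loop used cubic */
            float sample_val = v->sample[i]
                             + frac * (v->sample[i + 1] - v->sample[i]);

            dry_l[u] += v->vol_l * sample_val;
            dry_r[u] += v->vol_r * sample_val;
            fx1_l[u] += v->fx1_l * sample_val;
            fx1_r[u] += v->fx1_r * sample_val;
            fx2_l[u] += v->fx2_l * sample_val;
            fx2_r[u] += v->fx2_r * sample_val;

            v->pos += v->step;     /* looping/bounds handling omitted */
        }
    }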
From: Josh G. <jg...@us...> - 2002-11-13 08:34:44
I have been thinking about this issue for some time now. It's the primary
reason why I started making libInstPatch multi-threaded. My dream is to
have shared multi-peer patch editing (just add a touch of streaming MIDI
and you have yourself a jam session). SwamiJam is the name I have given
this particular application of Swami. This same idea could be applied to
the GUI --> LinuxSampler problem, I believe.

Some thoughts on GUI(Swami?)/libInstPatch/LinuxSampler:

The way I'm currently thinking of things is that libInstPatch could act as
a patch "server" and would contain the loaded patch objects. Other
"servers" could be set up to synchronize to patch objects residing on
different servers. These "servers" could take advantage of locally stored
patch files (if someone already has the patch that another user is using,
use it - ensuring they are identical could be done via simple size/name or
perhaps MD5 on the parameter data). In the case of the GUI, I think it
needs to have direct access to the patch objects (so anything it is
editing should be locally available, either being served or synchronized
to another machine).

GUI <--> libInstPatch server --> LinuxSampler
              |
              +- Vintage Dreams SF2 (Master)
              +- Lots of Foo.DLS (Slave)
              +- Wide Load.gig

GUI and LinuxSampler would not necessarily be communicating with the same
libInstPatch server; they could be on separate synchronized machines.

A synchronization protocol then needs to be implemented. It would look
something like this (pseudo-operations):

ADD <object-data>
REMOVE <object-id>
CHANGE <object-id> <property> <value>

Sample data should probably be handled specially to allow multiple
segments to be sent (rather than all at once). libFLAC could be used for
lossless compression of sample data.

The question then becomes how LinuxSampler will talk to its local patch
server. LinuxSampler will only really care about presets that are
currently active (i.e. selected on a MIDI channel). I think it would be
too much of a performance bottleneck for LinuxSampler to query the patch
server directly (since the server needs to take into account multiple peer
access, and therefore locking of objects). Each synth primitive (envelope,
LFO, filter, etc.) would have its own internal state data as well as a
copy of the parameters from the object system. Updates to the object
system could be queued and synchronized to LinuxSampler's local parameters
when they are not being accessed by the synth.

Anyways, that's how I see it. Does this make sense? Cheers.

Josh Green
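To make the "queued updates" idea concrete, here is a minimal hypothetical
sketch (not libInstPatch or LinuxSampler code; all names and sizes are
invented, and memory-ordering details are glossed over) of a
single-producer/single-consumer queue that an editor thread fills and the
audio thread drains between process cycles:

    /* Hypothetical sketch of queued parameter updates: the patch/object
     * side pushes changes, the synth side applies them outside its inner
     * loop. A lock-free SPSC ring buffer keeps the audio thread from
     * blocking; memory barriers are omitted for brevity. */
    #define UPDATE_QUEUE_SIZE 256   /* must be a power of two */

    typedef struct {
        int   object_id;            /* which patch object changed */
        int   property;             /* which generator/parameter  */
        float value;                /* new value                  */
    } ParamUpdate;

    typedef struct {
        ParamUpdate slots[UPDATE_QUEUE_SIZE];
        volatile unsigned write_pos;   /* advanced by the editor thread */
        volatile unsigned read_pos;    /* advanced by the audio thread  */
    } UpdateQueue;

    /* Editor/server side: returns 0 if the queue is full. */
    int queue_push(UpdateQueue *q, ParamUpdate u)
    {
        unsigned next = (q->write_pos + 1) & (UPDATE_QUEUE_SIZE - 1);
        if (next == q->read_pos)
            return 0;                  /* full; caller retries later */
        q->slots[q->write_pos] = u;
        q->write_pos = next;
        return 1;
    }

    /* Audio side: called once per cycle, before voices are rendered. */
    void queue_drain(UpdateQueue *q, void (*apply)(const ParamUpdate *))
    {
        while (q->read_pos != q->write_pos) {
            apply(&q->slots[q->read_pos]);
            q->read_pos = (q->read_pos + 1) & (UPDATE_QUEUE_SIZE - 1);
        }
    }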
From: Matthias W. <mat...@in...> - 2002-11-12 20:16:40
On Mon, Nov 11, 2002 at 11:53:02PM +0000, Steve Harris wrote:
> On Mon, Nov 11, 2002 at 10:20:06 +0100, Matthias Weiss wrote:
> > I think linuxsampler should also have the ability to create a sample
> > instrument/set. Therefore it will be necessary to edit wave files, set
> > loop points etc. Now the first problem that occurs to me: how do I
> > generate the waveview of the sample data when the samples are on one
> > machine and the GUI on the other? Should I pregenerate the sample view
> > data on the plugin machine and send it to the GUI machine? And if I
> > edit the sample, should the edit commands be sent over the net to the
> > plugin machine, and the plugin calculates the result, obtains the new
> > sample view data and sends it back?
>
> I would say that you only send graphical information to the GUI, and it
> sends instructions back to the engine, which executes them.

That's what I called "sample view data".

> The amount of data in a visible lump of waveform is actually pretty low
> if you think about it, you can't see more than a couple of thousand
> samples at once.

I agree, but it seems to me that the effort to create a remote GUI is
considerable. Running it remotely over the network might also have an
impact on the latency side because of the interrupts generated by the NIC;
of course, when running it locally this wouldn't be a problem. Further, as
I outlined in my previous mail, it probably won't be possible to integrate
an existing wave file editor in the sampler app, which I'd consider a true
drawback.

matthias
From: Steve H. <S.W...@ec...> - 2002-11-11 23:53:07
On Mon, Nov 11, 2002 at 10:20:06 +0100, Matthias Weiss wrote:
> I think linuxsampler should also have the ability to create a sample
> instrument/set. Therefore it will be necessary to edit wave files, set
> loop points etc. Now the first problem that occurs to me: how do I
> generate the waveview of the sample data when the samples are on one
> machine and the GUI on the other? Should I pregenerate the sample view
> data on the plugin machine and send it to the GUI machine? And if I
> edit the sample, should the edit commands be sent over the net to the
> plugin machine, and the plugin calculates the result, obtains the new
> sample view data and sends it back?

I would say that you only send graphical information to the GUI, and it
sends instructions back to the engine, which executes them. This will
reduce the bandwidth over the pipe, and reduce the locking problems etc.

The amount of data in a visible lump of waveform is actually pretty low if
you think about it; you can't see more than a couple of thousand samples
at once.

- Steve
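As an illustration of how little data a "visible lump of waveform" needs,
here is a hypothetical sketch (not from LinuxSampler or Swami) that reduces
an arbitrary sample range to one min/max pair per screen column, which is
all a remote GUI would need in order to draw it:

    /* Hypothetical sketch: reduce a sample range to per-column min/max
     * peaks so a remote GUI can draw the waveform from a few hundred
     * values instead of the raw sample data. */
    typedef struct { float min, max; } Peak;

    void build_peaks(const float *samples, long start, long end,
                     Peak *out, int columns)
    {
        long span = end - start;
        int col;
        for (col = 0; col < columns; col++) {
            long a = start + span *  col      / columns;
            long b = start + span * (col + 1) / columns;
            float lo = samples[a], hi = samples[a];
            long i;
            for (i = a + 1; i < b; i++) {
                if (samples[i] < lo) lo = samples[i];
                if (samples[i] > hi) hi = samples[i];
            }
            out[col].min = lo;    /* one pair per screen column */
            out[col].max = hi;
        }
    }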
From: Steve H. <S.W...@ec...> - 2002-11-11 23:34:50
On Mon, Nov 11, 2002 at 09:52:58 +0100, Matthias Weiss wrote:
> > The difference is that it allows you to (possibly) get a bit lower
> > latency. With out-of-process on my current system I can get down to
> > 128 sample blocks, with in-process and changing my filesystem I could
> > get down to 64. However I generally run at 256, as I don't really care
> > about latency that much, and I can get more processing done at 256.
>
> With what applications do you get this result?

All the ones I've tested (ardour, eca*, meterbridge, freqtweak, pd); the
64 figure is hypothetical, I've not run this box at that speed.

> Paul Davis speculated on the lad list that a simple client might just
> have the same latency as an in-process client - as said, I'd like to
> test it out.

Yes, it should. It just varies the worst case: if the worst case isn't bad
enough to make you fail to complete in time, it won't make any difference
whether it's IP or OOP. The latency thing is basically boolean: if you
complete within the ~3ms you're OK, if you don't you're not. Simple :)

- Steve
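For context on these block sizes, the period of a block is just its length
divided by the sample rate; the ~3 ms figure matches 128 frames at
44.1 kHz. A trivial check (the 44.1 kHz rate is an assumption, the thread
never states it):

    /* Block size -> period in milliseconds, assuming a 44.1 kHz rate. */
    #include <stdio.h>

    int main(void)
    {
        const double rate = 44100.0;
        const int blocks[] = { 64, 128, 256 };
        int i;
        for (i = 0; i < 3; i++)
            printf("%4d frames = %.2f ms\n",
                   blocks[i], blocks[i] / rate * 1e3);
        /* prints roughly: 64 = 1.45 ms, 128 = 2.90 ms, 256 = 5.80 ms */
        return 0;
    }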
From: Matthias W. <mat...@in...> - 2002-11-11 21:23:43
Hi all!

I'd like to know your opinion about some thoughts regarding the GUI. As
mentioned by Benno and Steve, when linuxsampler runs as a plugin of jackd
it'll be necessary to use some sort of communication mechanism between the
plugin and the GUI. While the stability plus and the code cleanness due to
the forced separation of engine code and GUI code are very attractive,
there are also some difficulties connected with it.

I think linuxsampler should also have the ability to create a sample
instrument/set. Therefore it will be necessary to edit wave files, set
loop points etc. Now the first problem that occurs to me: how do I
generate the waveview of the sample data when the samples are on one
machine and the GUI on the other? Should I pregenerate the sample view
data on the plugin machine and send it to the GUI machine? And if I edit
the sample, should the edit commands be sent over the net to the plugin
machine, and the plugin calculates the result, obtains the new sample view
data and sends it back? This seems to get really complicated.

It also makes another idea of mine impossible: to embed an existing wave
editor as a component (e.g. as a bonobo component) into linuxsampler. In
that case we would not have to reinvent the wheel and write yet another
wave editor, but could use an existing one (e.g. snd). But AFAIK the
existing ones have no remote-control capabilities.

matthias
From: Matthias W. <mat...@in...> - 2002-11-11 20:56:34
On Sun, Nov 10, 2002 at 10:56:49PM +0000, Steve Harris wrote:
> > My concern is that jackd is a "single point of error"; if it crashes,
> > all other jack clients won't run.
>
> That's why it's /really/ stable :)

I guess I know what you're saying ;-), though my thought was about
inherent stability.

> > I'd not give up the stability advantage of an out-of-process jack
> > client for some 1/10 msec's. On the other hand, if the difference in
> > latency is a factor of 2 or more I think it's worth the price. A
> > comparison of both variants would be enlightening. I'd like to write
> > some test code but I won't have time for that before December.
>
> The difference is that it allows you to (possibly) get a bit lower
> latency. With out-of-process on my current system I can get down to
> 128 sample blocks, with in-process and changing my filesystem I could
> get down to 64. However I generally run at 256, as I don't really care
> about latency that much, and I can get more processing done at 256.

With what applications do you get this result?

Paul Davis speculated on the lad list that a simple client might just have
the same latency as an in-process client - as said, I'd like to test it
out.

matthias
From: Steve H. <S.W...@ec...> - 2002-11-11 08:58:42
On Sun, Nov 10, 2002 at 11:14:53 -0800, Josh Green wrote:
> On Sun, 2002-11-10 at 14:53, Steve Harris wrote:
> > On Sun, Nov 10, 2002 at 11:34:00 -0800, Josh Green wrote:
> > > Jack depends on glib (although I don't think it requires 2.0).
> >
> > Actually jack doesn't depend on glib anymore, it was causing problems
> > with other apps that depended on glib.
>
> What kind of problems? As in 1.2.x/2.0 version conflicts? I'd like to
> make sure I don't run into the same issues with my libraries and the
> like. Cheers.

I think so. It all came out when we tried to make it build against glib
1.2 or 2.0; it turned out to be complicated and we only needed lists
anyway. I think that if you link against a library, both you and it have
to be using the same major glib version. You should check though, I
wasn't following the discussion very closely.

- Steve
From: Josh G. <jg...@us...> - 2002-11-11 07:13:39
On Sun, 2002-11-10 at 14:53, Steve Harris wrote:
> On Sun, Nov 10, 2002 at 11:34:00 -0800, Josh Green wrote:
> > Jack depends on glib (although I don't think it requires 2.0).
>
> Actually jack doesn't depend on glib anymore, it was causing problems
> with other apps that depended on glib.
>
> - Steve

What kind of problems? As in 1.2.x/2.0 version conflicts? I'd like to make
sure I don't run into the same issues with my libraries and the like.
Cheers.

Josh Green
From: Steve H. <S.W...@ec...> - 2002-11-10 22:56:53
On Sun, Nov 10, 2002 at 09:32:11 +0100, Matthias Weiss wrote:
> > Yes, that's the price we have to pay but crashing is not an option.
>
> My concern is that jackd is a "single point of error"; if it crashes,
> all other jack clients won't run.

That's why it's /really/ stable :)

> I'd not give up the stability advantage of an out-of-process jack client
> for some 1/10 msec's. On the other hand, if the difference in latency is
> a factor of 2 or more I think it's worth the price. A comparison of both
> variants would be enlightening. I'd like to write some test code but I
> won't have time for that before December.

The difference is that it allows you to (possibly) get a bit lower
latency. With out-of-process on my current system I can get down to 128
sample blocks, with in-process and changing my filesystem I could get down
to 64. However I generally run at 256, as I don't really care about
latency that much, and I can get more processing done at 256.

- Steve
From: Steve H. <S.W...@ec...> - 2002-11-10 22:54:04
On Sun, Nov 10, 2002 at 11:34:00 -0800, Josh Green wrote:
> Jack depends on glib (although I don't think it requires 2.0).

Actually jack doesn't depend on glib anymore; it was causing problems with
other apps that depended on glib.

- Steve
From: Matthias W. <mat...@in...> - 2002-11-10 20:35:47
On Fri, Nov 08, 2002 at 11:53:15PM +0100, Benno Senoner wrote:
> On Fri, 2002-11-08 at 22:57, Matthias Weiss wrote:
> > Having jackd loading plugins, that would also mean if a plugin
> > crashes, it takes the daemon with it, right?
>
> Yes, that's the price we have to pay but crashing is not an option.

My concern is that jackd is a "single point of error"; if it crashes, all
other jack clients won't run.

> Even when only the sampler module goes down it will still break your
> work. I prefer achieving rock solid 2.1 msec latency with the sampler
> to having the soundserver not crash in the case of a plugin failure in
> exchange for giving up precious msecs of real-time response.

I'd not give up the stability advantage of an out-of-process jack client
for some 1/10 msec's. On the other hand, if the difference in latency is a
factor of 2 or more I think it's worth the price. A comparison of both
variants would be enlightening. I'd like to write some test code but I
won't have time for that before December.

matthias
From: Benno S. <be...@ga...> - 2002-11-10 20:19:53
On Sun, 2002-11-10 at 20:34, Josh Green wrote:
> > Of course swami needs a powerful playback engine in order to provide
> > a what-you-hear-is-what-you-get feel.
> > This means that ideally the instrument's editor and playback engine
> > should go hand in hand so that the editor can match the engine's
> > capabilities and vice versa.
>
> I have been designing Swami with exactly this type of modularity in
> mind. I'm looking forward to working with the LinuxSampler project :)

Nice to have you on board!

Well, basically I think your idea is very good, but since I am not an
expert in terms of "live editing of samples" I have no clear idea what the
right solution looks like. One of our problems is that some samples reside
in RAM and some are streamed from disk with some caching. This means that
the editor must be aware of these issues. These are dependent on the
sample and engine format, thus it is probably wise to keep that kind of
code (without duplicating it at the editor level) within the sampler.

As Juan L. suggested, when providing a GUI for the sampler it is probably
best to control it through a TCP socket so that you can separate the
engine from the front end (e.g. controlling the sampler remotely on a
separate machine, even a windoze box that runs a ported gtk/qt/java GUI
:-) ). So if you were asking me, I would apply the same paradigm to the
instrument editor too. By serializing the access via a socket you do not
risk race conditions, and the thread that handles the parameter /
instrument manipulation can be optimized to not interfere with the
real-time engine.

I think remote editing will be advantageous in the future because the
studio professional might have an "audio processing farm" which is
networked with the front-end machine, which can control each engine in
real time. What do you think?

Josh, can you briefly describe what a general instrument editor must
provide (since I am not familiar with what capabilities an editor has to
offer), e.g. manipulating the parameters, looping, assigning samples to
keymaps, etc.? Keep in mind that we are going to work with both in-RAM and
on-disk samples, possibly very, very large libraries (hundreds of MB /
several GB).

Plus there is the fact that we want a truly modular sampler where you can
wire together basic audio building blocks which represent the final
instrument. That means some kind of loader module (possibly a save module
too) needs to be written for each engine. This may sound like sci-fi or
some such, but it isn't, and we plan to start with very basic stuff (like
a simple RAM sampler with only basic modulation). I think it will pay off
to go that way because once the concept is implemented very cool things
can be done, and I think that stuff like building an efficient AKAI, GIG,
SF2, etc. sampler will advance very quickly.

Thoughts?

Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
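To give a flavour of the TCP control idea, here is a minimal hypothetical
sketch of a line-based control listener; the port number and command names
are invented for illustration and this is not the control protocol
LinuxSampler actually uses. Error checking is omitted for brevity:

    /* Hypothetical sketch of a line-based TCP control socket for the
     * engine. Port and commands ("LOAD", "QUIT") are invented. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(7434);                 /* arbitrary port  */
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 1);

        int cli = accept(srv, NULL, NULL);           /* one GUI client  */
        FILE *in = fdopen(cli, "r");
        char line[256];
        while (fgets(line, sizeof line, in)) {       /* one cmd per line */
            if (strncmp(line, "LOAD ", 5) == 0)
                printf("would load patch: %s", line + 5);
            else if (strncmp(line, "QUIT", 4) == 0)
                break;
            else
                printf("unknown command: %s", line);
            /* a real engine would queue these for a non-real-time thread */
        }
        fclose(in);
        close(srv);
        return 0;
    }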
From: Josh G. <jg...@us...> - 2002-11-10 19:32:47
On Thu, 2002-11-07 at 16:42, Benno Senoner wrote:
> On Fri, 2002-11-08 at 01:14, Josh Green wrote:
>
> Hi Josh,
> I do agree on all your points.
> Indeed Fluid could provide a solid base for SoundFont playback and it
> would be cool if we could collaborate with the Fluid developers to
> produce something that becomes really good and powerful.
>
> Regarding an instrument editor, we strongly need such a beast and you
> are probably the most expert in that field.
> I'm grateful that you collaborate on the project since without
> a good editor it will be hard to create new patches without resorting
> to windows software.
> I hope that swami becomes really flexible and that in the near future
> it will let you build and edit DLS2, GIG and other instrument types.
>
> Of course swami needs a powerful playback engine in order to provide
> a what-you-hear-is-what-you-get feel.
> This means that ideally the instrument's editor and playback engine
> should go hand in hand so that the editor can match the engine's
> capabilities and vice versa.

I have been designing Swami with exactly this type of modularity in mind.
I'm looking forward to working with the LinuxSampler project :) If both
were based on the same patch manipulation library (such as libInstPatch)
this might be easily accomplished.

I think the issue of flexible multi-threaded patch objects versus
real-time access of parameters is going to need some thought. Perhaps a
parameter cache could be used that the synth engine can access directly
and that gets synchronized to the patch object system. There would most
likely be a lot of internal data that the synth itself needs for primitive
synth constructs (like envelopes, LFOs, etc.) which are really only needed
for active voices and could be allocated from a memory pool (no malloc).

Do you think it would be okay for LinuxSampler to depend on libInstPatch?
This would introduce the following dependencies:

libInstPatch
GObject 2.0/glib 2.0

Jack depends on glib (although I don't think it requires 2.0).
glib/gobject are required by GTK+ and are available on win32 platforms as
well as OS X. glib has a lot of other neat features that would probably
benefit the development of LinuxSampler (threading; data types like linked
lists, trees, hashes; memory pool functions, etc.).

The alternative route is to do something like what FluidSynth does. It is
currently loaded into Swami as a plugin and there is an SFLoader API in
Fluid that allows me to load patch information and data on demand as well
as update various generator parameters in real time. This would seem to
complicate matters though in the case of multiple patch formats, etc. I
think using the same patch library would make the most sense.

> cheers,
> Benno

Cheers.

Josh Green

P.S. If anyone is interested in helping out with Swami
(http://swami.sourceforge.net) directly, I could really use more
developers. The code base is C using GObject and the GUI uses GTK+; the
API docs for both those libraries can be found at http://www.gtk.org.
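To illustrate the "memory pool, no malloc" point, here is a small
hypothetical sketch (not Swami or LinuxSampler code; the Voice contents and
limits are invented) of a fixed-size voice pool built on a free list, so
voice state can be grabbed and released in the audio thread without
touching the allocator:

    /* Hypothetical sketch of a pre-allocated voice pool: all voice structs
     * are allocated up front and handed out via a free list, so the audio
     * thread never calls malloc(). */
    #include <stddef.h>

    #define MAX_VOICES 128

    typedef struct Voice {
        struct Voice *next_free;    /* free-list link                   */
        int note, channel;
        float env_level, lfo_phase; /* per-voice synth primitive state  */
    } Voice;

    static Voice voice_pool[MAX_VOICES];
    static Voice *free_list;

    void voice_pool_init(void)
    {
        int i;
        free_list = NULL;
        for (i = 0; i < MAX_VOICES; i++) {
            voice_pool[i].next_free = free_list;
            free_list = &voice_pool[i];
        }
    }

    Voice *voice_alloc(void)        /* NULL if all voices are in use */
    {
        Voice *v = free_list;
        if (v)
            free_list = v->next_free;
        return v;
    }

    void voice_release(Voice *v)
    {
        v->next_free = free_list;
        free_list = v;
    }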
From: Benno S. <be...@ga...> - 2002-11-08 22:42:15
On Fri, 2002-11-08 at 22:57, Matthias Weiss wrote:
> Having jackd loading plugins, that would also mean if a plugin crashes,
> it takes the daemon with it, right?

Yes, that's the price we have to pay, but crashing is not an option. Even
when only the sampler module goes down it will still break your work. I
prefer achieving rock solid 2.1 msec latency with the sampler to having a
soundserver that does not crash in the case of a plugin failure, in
exchange for giving up precious msecs of real-time response.

But given that the programming model for in-process and out-of-process is
the same, you can just compile the version you like better. (As said, the
fewer processes that run, the bigger the chance that there will not be
problems during high-load situations.)

Benno

http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
From: Matthias W. <mat...@in...> - 2002-11-08 22:00:38
On Thu, Nov 07, 2002 at 07:32:16PM +0100, Benno Senoner wrote:
> > > Regarding the JACK issues that Matthias W raised:
> > > I'm no JACK expert but I hope that JACK supports running a JACK
> > > client directly in its own process space as if it were a plugin.
> > > This would save unnecessary context switches since there would be
> > > only one SCHED_FIFO process running.
> > > (Do the JACK ALSA I/O modules work that way?)
> >
> > Yes, but there is currently no mechanism for loading an in-process
> > client once the engine has started, however that is merely because
> > the function hasn't been written. Both the ALSA and Solaris I/O
> > clients are in-process, but they are loaded when the engine starts up.
>
> OK, at least the engine is designed to work that way (so I guess for
> maximum performance some extensions for JACK will be required, but I
> assume that will not be a big problem).

Having jackd loading plugins, that would also mean if a plugin crashes, it
takes the daemon with it, right?

matthias
From: Benno S. <be...@ga...> - 2002-11-08 17:45:25
On Fri, 2002-11-08 at 13:12, Phil Kerr wrote:
> Although this speed is the official spec I've found that ALSA, and a
> number of MIDI cards, do not seem to throttle data back to the spec and
> will pass data through the interface at reception speed. Although this
> is a wider problem for me regarding DMIDI and hardware, it's potentially
> a good thing when we are dealing with software based D/MIDI applications
> (10 Gbit MIDI :).

Yes, of course; that's why I prefer the PC sequencer -> internal MIDI ->
PC software synth/sampler solution over using external gear. Timing will
be very tight and chords do not suffer from smearing. In fact, in some
extreme test cases with my proof-of-concept code, where I flooded it with
aligned note-on events, there were CPU overloads due to the voice
initialization code eating away a few cycles. We will probably need to
limit the note-ons per fragment in order to level out these CPU load
peaks, especially when using very small fragment sizes (32-64 samples);
with bigger fragment sizes there is plenty of time for handling the note
events.

> So regarding MIDI timing I get the impression that PC host based
> interfaces cannot guarantee the 31.25k rate, but testing is not
> conclusive.

No, that is not true. Some sound cards do not have MIDI RX or TX
interrupts and force the driver to go into polling mode, which means
either high CPU usage or low throughput (see the SoundBlaster AWE64 MIDI
out -> no TX irq -> MIDI sucks :-) ). With other cards like the Hoontech
you get note-on MIDI-out-to-in latency that matches the wire speed.

A looong time ago I wrote a little tool that measures this:
http://www.gardena.net/benno/linux/mididelay-0.1.tgz

Paul Davis and others (using decent MIDI cards) reported 1.1 msec note-on
MIDI-out-to-in latency, which matches the wire speed. On the SB AWE64 I
got about 10 msec (due to the lack of a TX irq), plus when MIDI out
traffic is very heavy the machine slows down to a crawl (using OSS/Free).
So before buying a MIDI card, pay attention that both RX and TX irqs are
present. But my preferred MIDI "cable" between two boxes will be DMIDI
over 100Mbit LAN anyway. :-)

PS: does my RAM sampler module proposal sound reasonable (taking into
account the suggestions you guys gave here, like Steve's modulation rate
reduction etc.)? If there are no objections I'm going to write some code
and make the sampling module "audible". :-)

cheers,
Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
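The 1.1 msec figure lines up with the MIDI wire speed: a note-on message is
3 bytes, each byte is framed as 10 bits on the wire (start + 8 data +
stop), and the bus runs at 31250 baud. A trivial check using those standard
framing numbers:

    /* MIDI wire-time sanity check: 3-byte note-on, 10 bits per byte on
     * the wire, 31250 baud -> ~0.96 ms, the same ballpark as the 1.1 msec
     * measured out-to-in latency quoted above. */
    #include <stdio.h>

    int main(void)
    {
        const double baud = 31250.0;
        const double bits = 3 * 10;    /* status byte + 2 data bytes */
        printf("note-on wire time: %.2f ms\n", bits / baud * 1e3);
        return 0;
    }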