| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2002 | | | | | | | | | | 27 | 120 | 16 |
| 2003 | 65 | 2 | 53 | 15 | | 19 | 8 | 35 | 17 | 70 | 87 | 94 |
| 2004 | 133 | 28 | 45 | 30 | 113 | 132 | 33 | 29 | 26 | 11 | 21 | 60 |
| 2005 | 108 | 153 | 108 | 44 | 72 | 90 | 99 | 67 | 117 | 38 | 40 | 27 |
| 2006 | 16 | 18 | 21 | 71 | 26 | 48 | 27 | 40 | 20 | 118 | 69 | 35 |
| 2007 | 76 | 98 | 26 | 126 | 94 | 46 | 9 | 89 | 18 | 27 | | 49 |
| 2008 | 117 | 40 | 18 | 30 | 40 | 10 | 30 | 13 | 29 | 23 | 22 | 35 |
| 2009 | 19 | 39 | 17 | 2 | 6 | 6 | 8 | 11 | 1 | 46 | 13 | 5 |
| 2010 | 21 | 3 | 2 | 7 | 1 | 26 | 3 | 10 | 13 | 35 | 10 | 17 |
| 2011 | 26 | 27 | 14 | 32 | 8 | 11 | 4 | 7 | 27 | 25 | 7 | 2 |
| 2012 | 20 | 17 | 59 | 31 | | 6 | 7 | 10 | 11 | 2 | 4 | 17 |
| 2013 | 17 | 2 | 3 | 4 | 8 | 3 | 2 | | 3 | | | 1 |
| 2014 | 6 | 26 | 12 | 14 | 8 | 7 | 6 | 6 | 3 | | | |
| 2015 | 9 | 5 | 4 | 9 | 3 | 2 | 4 | | | 1 | | 3 |
| 2016 | 2 | 4 | 5 | 4 | 14 | 31 | 18 | | 10 | 3 | | |
| 2017 | 39 | 5 | 2 | | 52 | 11 | 36 | 1 | 7 | 4 | 10 | 8 |
| 2018 | 3 | 4 | | 8 | 28 | 11 | 2 | 2 | | 1 | 2 | 25 |
| 2019 | 12 | 50 | 14 | 3 | 8 | 17 | 10 | 2 | 21 | 10 | | 28 |
| 2020 | 4 | 10 | 7 | 16 | 10 | 7 | 2 | 5 | 3 | 3 | 2 | 1 |
| 2021 | | 5 | 13 | 13 | 7 | | 1 | 11 | 12 | 7 | 26 | 41 |
| 2022 | 23 | | 8 | 1 | | | | 2 | | 3 | 1 | 1 |
| 2023 | | 5 | 2 | | | 1 | | 11 | 5 | 1 | | |
| 2024 | 2 | 4 | 1 | 1 | 1 | 1 | | | | | 10 | |
| 2025 | | 4 | 1 | 2 | | 17 | 1 | 4 | 7 | 1 | 8 | |
From: Christian S. <chr...@ep...> - 2004-01-07 17:26:34
|
On Wednesday, 7 January 2004 at 22:19, Claudio Mettler wrote:
> Hi all
>
> I got ls from cvs. First problem was that the "install.sh" file is
> lacking, so i copied another install.sh from somwhere else. I think you
> need to add said file to the cvs.
No, 'make install' should do it.
> Well, the second problem i couldn't solve:
>
> -----8<-----
> $ make
> Making all in defines
> make[1]: Entering directory `/home/claudio/sw/linuxsampler/defines'
> make[2]: Entering directory `/home/claudio/sw/linuxsampler/defines'
> make[2]: Nothing to be done for `all-am'.
> make[2]: Leaving directory `/home/claudio/sw/linuxsampler/defines'
> make[1]: Leaving directory `/home/claudio/sw/linuxsampler/defines'
> Making all in drivers
> make[1]: Entering directory `/home/claudio/sw/linuxsampler/drivers'
> g++ -DPACKAGE=\"LinuxSampler\" -DVERSION=\"0.0.1\" -DHAVE_LIBASOUND=1
> -DSTDC_HEADERS=1 -I. -I. -DOSS_E
> ude -I/usr/include/sigc++-1.0 -I../defines -I../modules/mixer
> -I../modules/voice -c sound_driver.cpp
> In file included from ../modules/voice/voice.h:21,
> from ../modules/mixer/mixer_base.h:24,
> from sound_driver.h:22,
> from sound_driver.cpp:18:
> ../defines/typedefs.h:33: syntax error before `(' token
> ../defines/typedefs.h:36: `cacabuffer' was not declared in this scope
> ../defines/typedefs.h:36: `m_val' was not declared in this scope
> ../defines/typedefs.h:36: ISO C++ forbids declaration of `sprintf' with
> no type
> ../defines/typedefs.h:36: `int sprintf' redeclared as different kind of
> symbol
It seems you really messed it up. Remove the whole linuxsampler directory
and make a fresh CVS checkout:
rm -r /home/claudio/sw/linuxsampler
mkdir /home/claudio/sw/linuxsampler
cd /home/claudio/sw/linuxsampler
cvs -d:pserver:ano...@cv...:/home/schropp/linuxsampler co linuxsampler
and then compile this way:
make -f Makefile.cvs && make
and finally to install it
make install
as usual.
CU
Christian
|
|
From: Claudio M. <cl...@si...> - 2004-01-06 20:17:49
|
Hi all
I got ls from CVS. The first problem was that the "install.sh" file is
lacking, so I copied another install.sh from somewhere else. I think you
need to add said file to the CVS.
Well, the second problem I couldn't solve:
-----8<-----
$ make
Making all in defines
make[1]: Entering directory `/home/claudio/sw/linuxsampler/defines'
make[2]: Entering directory `/home/claudio/sw/linuxsampler/defines'
make[2]: Nothing to be done for `all-am'.
make[2]: Leaving directory `/home/claudio/sw/linuxsampler/defines'
make[1]: Leaving directory `/home/claudio/sw/linuxsampler/defines'
Making all in drivers
make[1]: Entering directory `/home/claudio/sw/linuxsampler/drivers'
g++ -DPACKAGE=\"LinuxSampler\" -DVERSION=\"0.0.1\" -DHAVE_LIBASOUND=1
-DSTDC_HEADERS=1 -I. -I. -DOSS_E
ude -I/usr/include/sigc++-1.0 -I../defines -I../modules/mixer
-I../modules/voice -c sound_driver.cpp
In file included from ../modules/voice/voice.h:21,
from ../modules/mixer/mixer_base.h:24,
from sound_driver.h:22,
from sound_driver.cpp:18:
../defines/typedefs.h:33: syntax error before `(' token
../defines/typedefs.h:36: `cacabuffer' was not declared in this scope
../defines/typedefs.h:36: `m_val' was not declared in this scope
../defines/typedefs.h:36: ISO C++ forbids declaration of `sprintf' with
no type
../defines/typedefs.h:36: `int sprintf' redeclared as different kind of
symbol
/usr/lib/gcc-lib/i686-pc-linux-gnu/3.2.2/include/stdio.h:310: previous
declaration of `int sprintf(char*, const char*, ...)'
../defines/typedefs.h:36: initializer list being treated as compound
expression
../defines/typedefs.h:37: parse error before `return'
In file included from ../modules/voice/voice.h:21,
from ../modules/mixer/mixer_base.h:24,
from sound_driver.h:22,
from sound_driver.cpp:18:
../defines/typedefs.h:42:1: warning: no newline at end of file
In file included from ../modules/voice/voice.h:25,
from ../modules/mixer/mixer_base.h:24,
from sound_driver.h:22,
from sound_driver.cpp:18:
../defines/sample_defs.h:32:7: warning: no newline at end of file
In file included from sound_driver.cpp:18:
sound_driver.h:76: ISO C++ forbids declaration of `string' with no type
sound_driver.h:76: `string' declared as a `virtual' field
sound_driver.h:76: parse error before `(' token
make[1]: *** [sound_driver.o] Error 1
make[1]: Leaving directory `/home/claudio/sw/linuxsampler/drivers'
make: *** [all-recursive] Error 1
-----8<-----
My gcc:
-----8<-----
$ gcc -v
Reading specs from /usr/lib/gcc-lib/i686-pc-linux-gnu/3.2.2/specs
Configured with: /var/tmp/portage/gcc-3.2.2-r2/work/gcc-3.2.2/configure
--prefix=/usr --bindir=/usr/i686-pc-linux-gnu/gcc-bin/3.2
--includedir=/usr/lib/gcc-lib/i686-pc-linux-gnu/3.2.2/include
--datadir=/usr/share/gcc-data/i686-pc-linux-gnu/3.2
--mandir=/usr/share/gcc-data/i686-pc-linux-gnu/3.2/man
--infodir=/usr/share/gcc-data/i686-pc-linux-gnu/3.2/info --enable-shared
--host=i686-pc-linux-gnu --target=i686-pc-linux-gnu --with-system-zlib
--enable-languages=c,c++,ada,f77,objc,java --enable-threads=posix
--enable-long-long --disable-checking --enable-cstdio=stdio
--enable-clocale=generic --enable-__cxa_atexit
--enable-version-specific-runtime-libs
--with-gxx-include-dir=/usr/lib/gcc-lib/i686-pc-linux-gnu/3.2.2/include/g++-v3
--with-local-prefix=/usr/local --enable-shared --enable-nls
--without-included-gettext
Thread model: posix
gcc version 3.2.2 20030322 (Gentoo Linux 1.4 3.2.2-r2)
-----8<-----
Regards,
Claudio
BTW: Does anyone have a complete archive of this mailing list as a tarred
maildir or mbox file? I don't like the sf.net archive browser.
|
|
From: Rui N. C. <rn...@rn...> - 2004-01-06 08:55:28
|
Mark Knecht wrote:
> I guess you guys know what you're doing, but as a user I sure will look
> forward to when you get back to adding needed capabilities to LS itself.
> This is pretty boring for those of us so C++ challenged! ;-)
Oh, don't worry. This protocol interface thing will be just plain C, but it's on the main path to building an early GUI for linuxsampler. So my tasks at hand are all about the LSCP protocol implementation (a C API, code-named liblscp) and then a Qt GUI example for linuxsampler, in that particular order. None of these tasks will add any core features or capabilities to the linuxsampler engine; they are just a way to interface with it in a client/server fashion :) Until the server code is eventually plugged into linuxsampler, all core engine development will continue quite independently of the LSCP implementation. That's why you should not worry ;)
Bye,
--
rncbc aka Rui Nuno Capela
rn...@rn...
|
|
From: Mark K. <mar...@co...> - 2004-01-06 00:57:04
|
On Mon, 2004-01-05 at 16:09, Rui Nuno Capela wrote: <SNIP> > > Regards, I guess you guys know what you're doing, but as a user I sure will look forward to when you get back to adding needed capabilities to LS itself. This is pretty boring for those of us so C++ challenged! ;-) |
|
From: Rui N. C. <rn...@rn...> - 2004-01-06 00:12:33
|
Christian Schoenebeck wrote:
>> How about a shared memory access to the engine?
> Not at the moment, but might be added later. I don't see a big need for
> this at the moment. Rui planned to write a C library first being just a
> wrapper for the protocol, but that lib might be extended later with a
> shared memory access method internally (in case frontend and LS are
> running on the same host) without modifying the interface of the lib.
Yep. I'm working on the protocol C interface right now, but I'm a bit slow and still on the bare bones, so don't expect anything fancy this week :) The way I'm seeing it, this interface may be a sort of abstraction layer for the protocol, so that in the foreseeable future one can switch over to other IPC infrastructure instead of inet sockets (e.g. shm), if someone cares to code it :) But right now I'll be stuck with the TCP/UDP stuff. FYI, this is taking the form of multithreaded client/server code; business as usual ;)
Regards,
--
rncbc aka Rui Nuno Capela
rn...@rn...
|
|
From: Christian S. <chr...@ep...> - 2004-01-05 23:45:59
|
On Monday, 5 January 2004 at 16:31, Juhana Sadeharju wrote:
> From: Christian Schoenebeck <chr...@ep...>
> > http://www.linuxsampler.org/api/draft-linuxsampler-protocol-01.pdf
> > http://www.linuxsampler.org/api/draft-linuxsampler-protocol-01.sxw
> Please make these available so that anyone who starts browsing
> at www.linuxsampler.org finds them. I downloaded the whole site
> but the api directory was not downloaded because there are no
> urls to these documents.
Marek? Could you do this please?
> How about a shared memory access to the engine?
Not at the moment, but it might be added later. I don't see a big need for this at the moment. Rui planned to write a C library first, being just a wrapper for the protocol, but that lib might be extended later with a shared memory access method internally (in case frontend and LS are running on the same host) without modifying the interface of the lib.
CU
Christian
|
|
From: Christian S. <chr...@ep...> - 2004-01-05 23:33:46
|
On Monday, 5 January 2004 at 18:16, Jack O'Quin wrote:
> Christian Schoenebeck <chr...@ep...> writes:
> > On Thursday, 1 January 2004 at 03:55, Jack O'Quin wrote:
> > > As for <atomic.h>, I don't know for certain that there are problems,
> > > but I suspect that some of JACK's shared memory updates implicitly
> > > depend on "strong ordering" of storage operations, which AFAIK is
> > > guaranteed on existing x86 implementations, but certainly is not on
> > > some other platforms (like PowerPC SMP, for example). I can't supply
> > > any details, because this is just a suspicion. If I knew about a
> > > definite bug, I'd fix it, or at least document it.
> > atomicity.h is not an alternative for you at the moment, because
> > there is no atomic_set() equivalent in atomicity.h which is needed
> > for your ringbuffer.h
> Is this a guess? Do you know of a specific problem with ringbuffer.h?
Hmmm, now that you mention it: so far I thought SPARC SMP systems would lack atomicity when accessing 32-bit words, but looking at the respective part of atomic.h, where they use a spin-lock byte in the data word itself, it seems that this lock byte is only needed if you're using atomic_set() and atomic_read() together with other atomic functions. As ringbuffer.h only uses atomic_set() and atomic_read(), it seems you're right that on normal GNU systems this is not needed (still only in regard to ringbuffer.h).
Benno: so it seems we could really drop atomic.h in LS, or have I disregarded something?
CU
Christian
|
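The reason ringbuffer.h can get away with just atomic_set()/atomic_read() is that it is a single-reader/single-writer design: each side only ever stores to its own index. A rough sketch of that idea, written here with modern C++ std::atomic purely for illustration (the code under discussion used glibc's atomicity.h and plain loads/stores, not this API):

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Single-producer/single-consumer ring buffer: the writer only stores to
// 'write_idx', the reader only stores to 'read_idx', so neither index needs
// a read-modify-write operation -- only ordered loads and stores.
class SpscRing {
public:
    explicit SpscRing(std::size_t size) : buf(size), read_idx(0), write_idx(0) {}

    bool push(float v) {  // writer thread only
        std::size_t w = write_idx.load(std::memory_order_relaxed);
        std::size_t next = (w + 1) % buf.size();
        if (next == read_idx.load(std::memory_order_acquire))
            return false;                                   // full
        buf[w] = v;
        write_idx.store(next, std::memory_order_release);   // publish the sample
        return true;
    }

    bool pop(float& v) {  // reader thread only
        std::size_t r = read_idx.load(std::memory_order_relaxed);
        if (r == write_idx.load(std::memory_order_acquire))
            return false;                                   // empty
        v = buf[r];
        read_idx.store((r + 1) % buf.size(), std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buf;
    std::atomic<std::size_t> read_idx, write_idx;
};
```

The acquire/release pairs spell out exactly the "strong ordering" assumption discussed here; on x86 they compile down to plain loads and stores, which is why the issue only shows up on more weakly ordered SMP platforms.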
|
From: Juhana S. <ko...@ni...> - 2004-01-05 15:32:09
|
>From: Christian Schoenebeck <chr...@ep...> > > http://www.linuxsampler.org/api/draft-linuxsampler-protocol-01.pdf > http://www.linuxsampler.org/api/draft-linuxsampler-protocol-01.sxw Please make these available so that anyone who starts browsing at www.linuxsampler.org finds them. I downloaded the whole site but the api directory was not downloaded because there are no urls to these documents. How about a shared memory access to the engine? Juhana |
|
From: Christian S. <chr...@ep...> - 2004-01-03 00:06:10
|
Hi! I updated the protocol document:
http://www.linuxsampler.org/api/draft-linuxsampler-protocol-01.pdf
http://www.linuxsampler.org/api/draft-linuxsampler-protocol-01.sxw
Major changes:
- Updated with Rui's proposals (Rui, check if I forgot something).
- Error responses now look like this:
ERR:<errorcode>:<errormessage>
That way frontends have an easier way to e.g. show an error message already translated to a certain language, just by doing an error code lookup.
- Introduced warnings as possible return messages:
WRN:<warningcode>:<warningmessage>
A warning means the command was successful, but there are noteworthy issues to report; e.g. after a "LOAD INSTRUMENT" command, a warning might be returned because the deployed engine on that channel doesn't provide a certain feature to accurately play back the instrument.
I have not added binary extensions so far. Should I?
Again, send your suggestions for further improvements and corrections!
CU
Christian
|
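To illustrate the response format above: a frontend only needs to split a reply line on its first two colons to recover the status, the numeric code and the human-readable message. A minimal sketch against the draft wording (the code and message shown are invented, not real LSCP error codes):

```cpp
#include <iostream>
#include <string>

struct Reply { std::string status; int code; std::string message; };

// Parse "ERR:<errorcode>:<errormessage>" / "WRN:<warningcode>:<warningmessage>";
// anything else is passed through as a plain (successful) result line.
Reply parseReply(const std::string& line) {
    if (line.rfind("ERR:", 0) == 0 || line.rfind("WRN:", 0) == 0) {
        std::size_t second = line.find(':', 4);             // colon after the code
        return { line.substr(0, 3),
                 std::stoi(line.substr(4, second - 4)),
                 line.substr(second + 1) };
    }
    return { "OK", 0, line };
}

int main() {
    Reply r = parseReply("WRN:3:engine cannot play back this instrument accurately"); // invented
    std::cout << r.status << " (" << r.code << "): " << r.message << "\n";
}
```

A code lookup table in the frontend could then map the numeric code to a translated message, as the mail above suggests.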
|
From: Christian S. <chr...@ep...> - 2004-01-01 18:42:48
|
On Thursday, 1 January 2004 at 03:55, Jack O'Quin wrote:
> Paul Davis <pa...@li...> writes:
> > >> 3) i don't think JACK itself uses any atomic stuff at all anymore.
> > >> could be wrong there.
> > >It doesn't, but it probably should for some platforms.
> > really? off-hand, i can't think of anything that requires it. can you?
> Mantis bug:000008 documents a problem that I think is probably best
> solved using <atomicity.h>, rather than <atomic.h>. But, it does not
> seem to have very high priority at the moment. In general, I like
> <atomicity.h> better because it is user-space code, taken from libc
> and not the kernel.
> As for <atomic.h>, I don't know for certain that there are problems,
> but I suspect that some of JACK's shared memory updates implicitly
> depend on "strong ordering" of storage operations, which AFAIK is
> guaranteed on existing x86 implementations, but certainly is not on
> some other platforms (like PowerPC SMP, for example). I can't supply
> any details, because this is just a suspicion. If I knew about a
> definite bug, I'd fix it, or at least document it.
|
|
From: Steve H. <S.W...@ec...> - 2004-01-01 15:29:21
|
On Tue, Dec 30, 2003 at 06:54:25PM +0100, be...@ga... wrote:
> It depends what the user wants.
> For example do we want changing the enveloping information in real time ?
> Do other samplers permit this ?
> We could precompute dozen (or 100) of linear envelope segments resembling
> arbitrary curves and switch the tables in real time (at note-on time) without
> any overhead.
It must be faster to compute the linear segments than to read them from a table.
> performance = 90% of life :-)
> I'm still not convinced if we should go the log route.
> Let's see what others say.
Well, the envelope needs to be log/exp shaped, but I think linear segments is OK.
- Steve
|
|
From: Steve H. <S.W...@ec...> - 2004-01-01 15:28:15
|
On Tue, Dec 30, 2003 at 02:19:10PM -0300, Juan Linietsky wrote:
> First, considering that a segment size is between 50 and 100 samples,
> updating the values between segments and ramping linearly in between (be the
> envelope curved or not) will make absolutely NO audible difference compared to
> using a lowpass (I've implemented this several times, and I composed a few
> hundred songs with the code :), and considering that this is the most
> critical part of the whole app, I have to say that I'm against adding more
> code in there. The segments are too small already, much smaller than the
> delta times between envelope points.
I think 50-100 samples is too few - for fast attacks it definitely won't be enough. I think 32 is quite a common value.
> I also read somewhere in the thread that updating every few (4) samples
> would be a good idea, but I have to remind you that adding any
> kind of conditional inside the critical loop reduces the performance
> enormously on modern processors; I believe this is because it stalls
> the instruction pipeline.
You can roll it in with the loop termination condition, but I'd agree that 4 is too often. Someone could benchmark it to find where there's a sweet spot. This would be a good candidate for SIMD instruction optimisation, to run 4 envelopes at once.
- Steve
|
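To make the trade-off above concrete: with per-segment linear ramps, the per-sample work inside the render loop reduces to a multiply and an add, and segment boundaries are handled outside it. A rough sketch (segment handling and names are illustrative, not LinuxSampler's actual engine code):

```cpp
#include <cstddef>

// One linear envelope segment: ramp the gain by 'delta' per sample, where
// delta = (target_gain - current_gain) / segment_length.
void applyGainRamp(float* out, const float* in, std::size_t frames,
                   float& gain, float delta)
{
    for (std::size_t i = 0; i < frames; ++i) {
        out[i] = in[i] * gain;   // no conditionals in the inner loop
        gain += delta;           // linear segment; also an easy SIMD candidate
    }
}

// The caller chops the audio block at segment boundaries, e.g.:
//   while (frames_left) {
//       n = min(frames_left, frames_left_in_segment);
//       applyGainRamp(out, in, n, gain, segment.delta);
//       advance pointers; switch to the next segment when this one ends;
//   }
```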
|
From: David O. <da...@ol...> - 2004-01-01 14:59:51
|
On Thursday 01 January 2004 15.48, Rui Nuno Capela wrote:
> benno wrote:
> > As said ASCII is elegant etc but does it pay off to waste
> > network and cpu resources (and programming resources because
> > ASCII requires parsers) ?
> > I'd like to hear your opinions about ASCII vs binary from you
> > guys.
> IMO assuming the lscp service is implemented on a different thread
> than the midi/audio ones, the performance penalty of ASCII vs.
> binary parsing is cheap nowadays.
Yes, we are talking about hundreds of cycles per command, which is practically nothing - *provided* there are only tens or maybe hundreds of commands per second.
So, what kind of traffic volumes are we expecting? What kind of relation does the traffic volume have to the CPU power of the server machine? (You probably wouldn't be chatting about the states of 128 channels if the server only has CPU power for 32 voices.)
[...]
> I would suggest that we first specify and implement on ASCII. If
> someone would take the task, the binary dialect might be also an
> alternative (e.g. via different tcp and udp ports).
Makes sense. Let's not optimize before we're actually sure it could make a difference. :-)
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
|
|
From: David O. <da...@ol...> - 2004-01-01 14:51:45
|
On Wednesday 31 December 2003 20.49, Christian Schoenebeck wrote:
[...]
> Again, we can go a parallel way (ASCII an binary), but at least I
> also like to be able to send plain ASCII commands.
How about introducing two layers; pre and post lexer?
That is, if you want more efficient parsing, at the expense of a slightly more complicated interface, use tokens and binary values instead of string identifiers and decimal numbers.
We could have a per-connection switch for both directions, one switch for each direction, or even per-command "signal bytes" (maybe in the 128..255 range) to select ASCII or binary requests and replies on a per-command basis. I think the first option makes most sense (a single switch per connection), but it might turn out handy to be able to mix ASCII and binary commands.
The parsing of a protocol like the proposed one can be made very efficient (a few levels of nested switch()es or similar tree structure), so I don't think a "pure" binary protocol would be much faster. It's the lexing that burns cycles, especially if there are many and/or user defined "keywords" that require symbol table lookups for tokenization. Decimal ASCII parsing is also pretty expensive, at least on weaker CPUs with slow divisions. I haven't looked hard into optimizing that kind of stuff, but I'm quite sure binary doubles and even MIDI style variable length integers or similar is faster than decimal ASCII.
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
|
|
From: Rui N. C. <rn...@rn...> - 2004-01-01 14:49:00
|
benno wrote:
> As said ASCII is elegant etc but does it pay off to waste
> network and cpu resources (and programming resources because ASCII
> requires parsers) ?
>
> I'd like to hear your opinions about ASCII vs binary from you guys.
IMO assuming the lscp service is implemented on a different thread than
the midi/audio ones, the performance penalty of ASCII vs. binary parsing
is cheap nowadays.
On the client side it is even cheaper, if the command and response strings
are reasonably and elegantly structured, as I think they're coming about.
Of course the binary would be much more CPU and network effective but
regarding general interoperability I certainly prefer the ASCII route :)
As I've said once before, I will/can try to wrap lscp in a higher-level
API which would be asynchronous and event oriented. It will be implemented
as liblscp.so (lscp would stand for LinuxSampler Control Protocol).
Christian Schoenebeck wrote:
>> Accordingly, some notification events would take some general format:
>>
>> CHANGE ENGINE {INFO|<engine-param-name>} <engine-name>
>> CHANGE CHANNEL {INFO|<channel-param-name>} <channel-number>
>
> Yes, that would be acceptable, at least from my side.
>
Glad you liked :)
>
> I definitely vote at least for ASCII commands. I want to be able to even
> telnet the LinuxSampler box in case. But we could add commands where
> LinuxSampler will response with a binary answer if the frontend prefers
> this, e.g.:
>
> GET CHANNEL BUFFER FILL_STATE_BINARY
>
Same with me. Following my rationale, this latter command would be better
written generically as:
GET CHANNEL BUFFER_FILL_STATE {BYTES|PERCENTAGE|BINARY} <sampler-channel>
> Regarding the events e.g. for sliders. this is not that stressful.
> In case e.g. a effect setting has been changed by a MIDI slider for
> example, the engine will just send a small UDP event packet and we
> could even add the new value with this event packet:
>
> CHANGE CHANNEL EFFECT_DEPTH <sampler-channel> <new-value>
>
Quite reasonable! And on my generic syntax form:
CHANGE CHANNEL <channel-param-name> <sampler-channel>
[<channel-param-value>]
We're getting there... nirvana :)
>
> Again, we can go a parallel way (ASCII an binary), but at least I also
> like to be able to send plain ASCII commands.
>
> Other opinions?
I would suggest that we first specify and implement on ASCII. If someone
would take the task, the binary dialect might also be an alternative (e.g.
via different tcp and udp ports).
Happy new year!
--
rncbc aka Rui Nuno Capela
rn...@rn...
|
|
From: Mark K. <mar...@co...> - 2004-01-01 00:57:49
|
Christian,
Hi. Everything is looking fine after adding these two hacks we came up with. I've tested the following files and they've all acted fine (within reason and within the limits of LS): Bardstown Bosendorfer Piano LoFi Junkiez Drone Archeology Drones Sonic Implants - Arp Ensemble 1 Wizzo - Large Ambient Drums Digital Complete - Mixed Percussion 1 Glasstrax Tamborine Boris Hohner Clavinett Roland JX-3P Quark Lead Worra's Prophet - Clean Pad Bigga Giggas Orchestral Brass - Trumpet D-Sound Breathy Sax SAM Trumpet (Freebie) Bigga Giggas Trombone
The next set of gigs have problems, but not caused by these code changes. Some I've reported before, some are new:
Scarbee J-Fingered Bass - sound is completely noise - probably key switches are causing big problems. I don't think you handle key switches at all, right?
02 Hybrid Strings 2 ECO - Causes segfault consistently!!
Sonic Implants - B3 Organ - Seems stuck on a high vibrato setting
All in all I'd say this is working quite well and would encourage you to add these two fixes to CVS. Happy New Year!!!
Cheers,
Mark
|
|
From: Christian S. <chr...@ep...> - 2003-12-31 19:53:52
|
On Wednesday, 31 December 2003 at 13:19, Rui Nuno Capela wrote:
> IMHO this leads to a more uniform and structured syntax that leaves open
> every other parameter one will come about in a near future--quite
> unavoidable isn't it? :)
>
> Accordingly, some notification events would take some general format:
>
> CHANGE ENGINE {INFO|<engine-param-name>} <engine-name>
> CHANGE CHANNEL {INFO|<channel-param-name>} <channel-number>
Yes, that would be acceptable, at least from my side.
On Wednesday, 31 December 2003 at 12:22, be...@ga... wrote:
> As said ASCII is elegant etc but does it pay off to waste
> network and cpu resources (and programming resources because ASCII requires
> parsers) ?
The command grammar defined in the initial draft is so far simply of Chomsky
type 3, meaning it can simply be resolved from left to right. That's really,
really simple and the parser would thus be very simple to implement and
efficient. Ok, not as efficient as a binary protocol solution of course.
> For example client B could change parameters in linux sampler eg,
> reverb amounts, FX send levels continuously while client A (a GUI)
> display the faders which should be moving constantly.
> With ASCII we send a lot of data around need to parse it in to LS engine
> and then parse back in client A etc.
>
> I'd like to hear your opinions about ASCII vs binary from you guys.
I definitely vote at least for ASCII commands. I want to be able to even
telnet the LinuxSampler box in case. But we could add commands where
LinuxSampler will respond with a binary answer if the frontend prefers this,
e.g.:
GET CHANNEL BUFFER FILL_STATE_BINARY
so the client doesn't have to parse an ASCII response string and will just get
(as you suggested) a binary array representation.
Regarding the events, e.g. for sliders: this is not that stressful. In case
an effect setting has been changed by a MIDI slider, for example, the
engine will just send a small UDP event packet and we could even add the new
value with this event packet:
CHANGE CHANNEL EFFECT_DEPTH <sampler-channel> <new-value>
The client doesn't have to send a command to get this information; it will
automatically be informed by the engine and on the other side, LinuxSampler
doesn't have to parse such a "request-for-update" ASCII command.
Again, we can go a parallel way (ASCII and binary), but at least I also like to
be able to send plain ASCII commands.
Other opinions?
CU
Christian
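For what it's worth, the "just telnet the box" style of use described above needs nothing more than a line-based TCP client. A hypothetical snippet (POSIX sockets; the port and the exact command string are placeholders, since the draft syntax was still in flux):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    // Connect to a (hypothetical) LinuxSampler box listening on localhost.
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8888);                       // placeholder port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (sockaddr*)&addr, sizeof(addr)) < 0) { perror("connect"); return 1; }

    // One plain ASCII command in draft style, terminated by CRLF.
    const char* cmd = "GET CHANNEL BUFFER_FILL PERCENTAGE 0\r\n";
    write(fd, cmd, strlen(cmd));

    // Read and print the single-line reply (e.g. an ERR:/WRN: message).
    char buf[512];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
    close(fd);
    return 0;
}
```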
|
|
From: Bonilla-Toledo, d. (DPS_MOLR1.1) <bon...@hp...> - 2003-12-31 18:57:31
|
Hi,
Wouldn't it be faster (for performance) to use an API and MIDI rather than a
network protocol?
If the goal is to be able to control a rack-mounted Linux sampler from
somewhere else, there are already MIDI controllers for that purpose.
If the GUI is built with the rest of the code (with Qt), one could just
have the code send events through the API to the GUI and have the GUI update
the position of knobs, sliders etc. based on what it receives.
That would also make it easier to have the code update the GUI when MIDI
control messages are received.
As a hypothetical example, if I had a slider in the GUI to control the
master volume:
#define MIDICONTROL_OUT_ENABLED 1
int whatever_the_main_volume_is_for_gui = 99;
...
QSlider * slider = new QSlider( Horizontal, this, "slider" );
slider->setRange( 0, 127 );
slider->setValue( get_current_mastervolume_orwhatever() );
...
And then the API could be something like:
int get_current_mastervolume_orwhatever()
{
    if (MIDICONTROL_OUT_ENABLED) {
        // Have an API call that echoes changes to the MIDI output.
        // 1st arg is the MIDI control message, 2nd arg is the MIDI CC value;
        // that way the MIDI bus gets whatever change was made.
        Send_msg_to_midi_out( 17, whatever_the_main_volume_is_for_gui );
    }
    // Return the integer to set the position of the slider in the GUI.
    return ( whatever_the_main_volume_is_for_gui );
}
Or something like that.
And then doing something similar to parse the MIDI in CC changes to the
GUI.
You could use a remote25 to control the sampler real-time and have a
sequencer record your performance. Just like a hw sequencer would.
How's that sound?
-Dave-
-----Original Message-----
From: lin...@li...
[mailto:lin...@li...] On Behalf Of Rui
Nuno Capela
Sent: Wednesday, December 31, 2003 5:20 AM
To: lin...@li...
Subject: Re: [Linuxsampler-devel] LinuxSampler control protocol (initial
draft)
Christian Schoenebeck wrote:
>
> Here is the promised initial draft for the LinuxSampler control=20
> protocol:
>
>
http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.pdf
>
http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.rtf
>
> I expect you to post your suggestions for improvements / corrections=20
> for the protocol. Sorry that I've not created a plain text version=20
> yet.
>
ACK, read and assimilated :)
My early comment would go on future extensibility. Thus the general
command syntax should be rather be a litle more consistent, e.g.
GET ENGINE {INFO|<engine-param-name>} <engine-name>
SET ENGINE <engine-param-name> <engine-name> <engine-param-value> ...
GET CHANNEL {INFO|<channel-param-name>} <channel-number>
SET CHANNEL <channel-param-name> <channel-number>
<channel-param-value> ...
Some engine or channel parameters might be read-only (GET), others
read-write (GET|SET). The INFO instruction would return the CRLF
separated list of read-only parameters, either for the given ENGINE or
CHANNEL respectively.
Note that on GET CHANNEL commands the <channel-number> is specified
_before_ the <channel-param-value>, which leaves open the possibility to
an array or ordered series of parameter values (look above at the
intended ellipsis notation).
IMHO this leads to a more uniform and structured syntax that leaves open
every other parameter one will come about in a near future--quite
unavoidable isn't it? :)
Accordingly, some notification events would take some general format:
CHANGE ENGINE {INFO|<engine-param-name>} <engine-name>
CHANGE CHANNEL {INFO|<channel-param-name>} <channel-number>
I hope you get the picture.
That was my EUR0.02 :)
CU-l8er
--
rncbc aka Rui Nuno Capela
rn...@rn...
|
|
From: Rui N. C. <rn...@rn...> - 2003-12-31 12:20:49
|
Christian Schoenebeck wrote:
> Here is the promised initial draft for the LinuxSampler control protocol:
> http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.pdf
> http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.rtf
> I expect you to post your suggestions for improvements / corrections for
> the protocol. Sorry that I've not created a plain text version yet.
ACK, read and assimilated :)
My early comment would go on future extensibility. Thus the general command syntax should rather be a little more consistent, e.g.:
GET ENGINE {INFO|<engine-param-name>} <engine-name>
SET ENGINE <engine-param-name> <engine-name> <engine-param-value> ...
GET CHANNEL {INFO|<channel-param-name>} <channel-number>
SET CHANNEL <channel-param-name> <channel-number> <channel-param-value> ...
Some engine or channel parameters might be read-only (GET), others read-write (GET|SET). The INFO instruction would return the CRLF-separated list of read-only parameters, either for the given ENGINE or CHANNEL respectively.
Note that on GET CHANNEL commands the <channel-number> is specified _before_ the <channel-param-value>, which leaves open the possibility of an array or ordered series of parameter values (look above at the intended ellipsis notation).
IMHO this leads to a more uniform and structured syntax that leaves open every other parameter one will come about in the near future--quite unavoidable, isn't it? :)
Accordingly, some notification events would take the same general format:
CHANGE ENGINE {INFO|<engine-param-name>} <engine-name>
CHANGE CHANNEL {INFO|<channel-param-name>} <channel-number>
I hope you get the picture. That was my EUR0.02 :)
CU-l8er
--
rncbc aka Rui Nuno Capela
rn...@rn...
|
|
From: <be...@ga...> - 2003-12-31 12:20:14
|
David Olofson <da...@ol...> writes:
> Well, that's really more about scalability and accuracy; with a sample
> accurate event system, you can put ramps (of whatever kind you may
> have) exactly where you want them, and the "new ramp" calculations
> hit you only when you actually start a new ramp. If you're using
> linear ramps only, the code in the inner DSP loop is very simple and
> fast, but depending on what you're playing, you *may* get extra
> overhead due to the greater number of ramp events needed at times.
Agreed, but I think that the density of linear ramp segments is relatively low even in the most complex cases. I mean: I guess it's impossible to hear the difference between an exponential envelope that is rendered sample by sample and one that is made up of linear segments that are, let's say, 8 samples long.
As said, our event system is not limited; we can even change values EVERY sample if we need to, but I guess that would be total overkill because you are entering the AM/FM modulation domain then :-).
My only concern is if we want envelopes whose shape and characteristics can be changed by real-time user inputs (e.g. a MIDI controller). To keep overhead low we would probably need to precompute such envelopes, and the limitation could be that if there are too many variables the number of tables gets too big. But these are extreme cases. Perhaps you could even compute them in real time, since MIDI CC events occur relatively seldom compared to the sample rate.
> Basically, an event system has a low, fixed cost in the DSP units that
> use it. It does not have the accuracy limitations of systems with a
> control rate lower than the audio sample rate, nor does it have the
> higher fixed cost of converting control values to internal audio rate
> "control change coefficients" every N audio frames.
Yes, I'm against such systems too. We want sample accurate events because we want future interoperability with sequencers that can send sample accurate events, and we want LS to provide decent synth-like modulation stuff. Think about the dynamic DSP recompiler. I guess users will come up with very interesting "synths" made out of LS modules. Perhaps in the future the "Sampler" word in LinuxSampler will not be appropriate anymore. Who knows, we will see .... :-)
> So, in short, I'd recommend an event system regardless of what kind of
> ramping is used, for any serious system that doesn't use audio rate
> control streams. Any fixed lower control rates will cause trouble in
> some situation, forcing sound programmers to use prerendered samples
> for trivial things like short attack noises and the like.
As said, pure sampling is nice and you can reproduce natural instruments faithfully by using very large multisamples (many velocity layers plus sampling each note/key), but for electronic instrument stuff etc. it's much better if samples can be modified and shaped by a good modulation engine. You know .... total world domination in the sampler domain requires high quality engines :-)
> OTOH, this is probably not a major issue in a sampler. A virtual
> analog synth *must* have very accurate envelope timing for serious
> percussive sounds, but on a sampler, you tend to use envelopes and
> effects to tweak the timbre of complex sounds, rather than to create
> new sounds from very simple waveforms. A fixed control rate might
> work, but it would make LinuxSampler more specifically a sampler,
> without much scalability into "real" synthesis.
That is exactly my point. I'm against a fixed control rate and other quick-n-dirty tradeoffs; I guess you guys are too.
> Note that, as Ian points out, non-linear segments aren't all that
> expensive. However, reverse calculations, splitting, combining and
> otherwise manipulating streams of "ramp" events gets more
> complicated with non-linear functions, so in some cases, the
> reduction of event density at the targets may not be worth the
> increased complexity in other places.
Exactly.
> If only performance matters, I guess non-linear ramps could be faster
> than linear only, but I suspect the code would be a lot more
> complicated, if we are to do things like combining and manipulating
> event streams. Not sure, though. It's a balance thing. If there isn't
> too much code that's affected, a more complex data format might pay
> off.
As said, as soon as you have less than one ramping event every 8-10 samples (assuming that there is no audible difference compared to envelopes rendered for each sample), then it pays off to use linear ramps, because the difference between linear and higher order ones is so small. Plus, if you have an event every 8 samples, the overhead compared to having an event, let's say, every 20-30 samples (because using higher order ramps requires a lower event density) is low, since "most of the time" there will not be an event that needs to be handled (at most 1 every 8 samples).
> > Keep in mind it must for example handle following tasks:
> > assume there is a pitch envelope running (composed of linear
> > segments). Now the user operates the pitchbender.
> > The real time event is timestamped and delayed till the next audio
> > fragment gets processed. If there is already an envelope running
> > the pitchbender needs to "add" his own pitch to the current one
> > possibly using an event. for example if we want two pitch envelopes
> > modulating the same sample what's the best way to do it ?
> Right; this is where linear segments become a bit easier.
> However, note that not even linear segments are trivial in a system
> where the ramps of streams to combine can start and end at any time.
> There's also a risk of event density exploding, if you combine too
> many streams, as the normal solution would be to just split every
> time a new ramp event comes in from either source. If there is a risk
> that you'll frequently be combining streams with high density, you'll
> need to take care of this one way or another. Meanwhile, this problem
> does not exist in a fixed control rate system.
Yes, this is one of the disadvantages of event based systems. Perhaps Christian's proposed bidimensional event array is a good solution? The only thing I worry about is the time it takes to figure out if there are events pending. With the event list you know the samples_to_next_event value and can simply skip over it in the innermost audio loop, thus when no events occur there is no CPU overhead. Christian, any idea if the samples_to_next_event can be applied to your bidimensional event array too?
> Hybrid solution: Dynamic control rate. Use timestamped events, but try
> to keep events locked to a common heartbeat as far as possible, so
> you get events from multiple inputs simultaneously most of the time.
> You could even require that all plugins in a sub-net use the same
> heartbeat at all times, so your event processors will only ever see
> simultaneous events.
Interesting, but I guess it makes the system even more complex, perhaps not buying us much in terms of performance. I don't know ... we would need some real-world benchmarks to see what's better for us.
> I guess one could come up with hybrids from all over the scale between
> fixed control rate and timestamped events.
> > Since pitch envelopes are made of "deltas" in theory one could
> > simply add up the deltas when events come in.
> Yes - if the adding is done where events turn into parameters for some
> DSP code. However, if you have dedicated event processing plugins,
> the normal action would be to update the internal deltas and generate
> new events. Thus, every input event results in one output event.
> Even eliminating doubles with the same timestamp requires extra work.
> (Not much, though. Just keep a flag to trig the generation of the
> output event before checking the # of frames to the next input
> event.)
> > eg ... if there is only one envelope then the events should
> > overwrite the current pitch delta but if you want to mix two or
> > more envelopes then deltas should be added up (basically you could
> > calculate the delta of the delta between events and simply add up
> > that to the current delta. That way AFAIK it should work with one
> > single and multiple active envelopes.
> > Am I missing something ? :-)
> Well, if you're going to do this in a seriously useful way, I don't
> think you can do it at the delta level near the DSP code. Combining
> controls usually includes some scaling as well as adding. More
> importantly, unless you want to hardwire the control routing, you'll
> need proper nets of event processor plugins, rather than support for
> multiple outputs to one input.
We will see. I think that after some discussions we can come up with a good tradeoff that is performant, flexible and provides high audio quality.
cheers,
Benno
-------------------------------------------------
This mail sent through http://www.gardena.net
|
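A tiny sketch of the samples_to_next_event idea mentioned above: the voice renders gap-sized runs between timestamped events, so the innermost loop never checks for pending events at all. This is illustrative structure only, not the actual LS event system (the per-sample DSP is stubbed out):

```cpp
#include <cstdint>
#include <vector>

struct Event { uint32_t frame; float pitch_delta; };  // timestamped within the fragment

void renderFragment(float* out, uint32_t frames,
                    const std::vector<Event>& events,  // assumed sorted by frame
                    float& pitch)
{
    uint32_t pos = 0, next = 0;
    while (pos < frames) {
        // samples_to_next_event: render branch-free up to the next event (or fragment end)
        uint32_t until = (next < events.size() && events[next].frame < frames)
                             ? events[next].frame : frames;
        for (uint32_t i = pos; i < until; ++i)
            out[i] = pitch;                            // stand-in for the real per-sample DSP
        if (next < events.size() && events[next].frame == until)
            pitch += events[next++].pitch_delta;       // apply the event, then keep rendering
        pos = until;
    }
}
```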
|
From: Rui N. C. <rn...@rn...> - 2003-12-31 11:23:30
|
Christian Schoenebeck wrote:
> Hi!
> Here is the promised initial draft for the LinuxSampler control protocol:
> http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.pdf
> http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.rtf
> I expect you to post your suggestions for improvements / corrections for
> the protocol. Sorry that I've not created a plain text version yet.
ACK. My first comment goes to the literal syntax of some commands, to make them a bit more structured. It comes down to dropping an underscore character in favor of a space. For example, on "3.8 Getting channel information", instead of the syntax:
GET CHANNEL_INFO <sampler-channel>
it should be:
GET CHANNEL INFO <sampler-channel>
Likewise, I'll suggest:
3.9 GET CHANNEL_VOICE_COUNT -> GET CHANNEL VOICE_COUNT
3.10 GET CHANNEL_STREAM_COUNT -> GET CHANNEL STREAM_COUNT
3.11 GET CHANNEL_BUFFER_FILL BYTES -> GET CHANNEL BUFFER_FILL BYTES
3.12 GET CHANNEL_BUFFER_FILL PERCENTAGE -> GET CHANNEL BUFFER_FILL PERCENTAGE
I hope you get the picture. Thus the general format would be:
--
rncbc aka Rui Nuno Capela
rn...@rn...
|
|
From: <be...@ga...> - 2003-12-31 11:22:33
|
Hi, I briefly browsed the protocol looks interesting.
Some comments about performance:
While ASCII is nice, human readable and endian neutral it's
sometimes a bit too heavy for a realtime protocol.
While I agree that some commands like "load new sample" etc
are not time critical, other kind of commands like
"client requests the buffer fill status of 200 buffers (200 active voices)"
which might be issued 30 times/sec (to produce a fluid animation of
fill-status indicators) could add some overhead in both the client and
server not to mention that it requires an ASCII parser which makes the
code more complex.
On the other hand, if you use a binary format, transferring the fill status
of buffers is just a matter of sending an array of ints (with some small
header) to the network.
The client code is simple too: just read the array from network and start
accessing it.
OK, if the server is little endian and the client GUI runs on a big endian
box then byteswapping is needed, but this could be handled easily by a macro.
typedef struct {
short cmdtype; // type of command
short datalength; // length of the payload (can be 0)
} cmd_header_t;
enum {
CMD_GET_FILLSTATUS,
CMD_FILL_STATUS
};
typedef struct {
cmd_header_t cmdheader;
} cmd_get_fillstatus_t;
typedef struct {
cmd_header_t cmdheader;
int fill_amount[MAXVOICES];
} cmd_fillstatus_t;
when the client wants to request the fillstatus it sends
cmd_get_fillstatus
and sets
cmdheader.cmdtype=CMD_GET_FILLSTATUS
cmdheader.datalength=0 (because there is no payload attached).
the datalength field can be used by server or client to read
the right amount of data or to skip over unknown commands.
the server simply sits in a loop reading
sizeof(cmd_header_t) bytes from network.
then it looks at cmd_header_t.cmdtype and performs actions based
on the type of command.
For example when it sees CMD_GET_FILLSTATUS
it simply sends back a CMD_FILL_STATUS packet
(with attached payload that contains the buffer fill values).
As you see it's very simple and what's more important
extremely efficient both in terms of CPU usage (no complex parsing required) and
network bandwidth utilization.
You can send thousands of commands per second without using noticeable
amounts of CPU on the server or client.
You can define easy-to-use macros like this:
#define set_cmd_get_fillstatus(a) do { (a)->cmdtype = CMD_GET_FILLSTATUS; (a)->datalength = 0; } while (0)
that way to send the CMD_GET_FILLSTATUS the client can simply do:
cmd_get_fillstatus_t cmd;
set_cmd_get_fillstatus(&cmd);
write(socketfd, &cmd, sizeof(cmd));
Same for the server side responses.
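For illustration, the server-side dispatch loop described above might look roughly like this. It assumes the cmd_header_t / cmd_fillstatus_t definitions and the CMD_* enum from this mail, a connected socket fd, matching endianness, and an engine query helper that is invented here (blocking reads; short reads of the header ignored for brevity):

```cpp
#include <unistd.h>
#include <sys/types.h>

void get_engine_fill_status(int fill_amount[]);   // invented for this sketch

void serve(int fd) {
    cmd_header_t hdr;
    while (read(fd, &hdr, sizeof(hdr)) == (ssize_t)sizeof(hdr)) {
        if (hdr.cmdtype == CMD_GET_FILLSTATUS) {
            cmd_fillstatus_t reply;
            reply.cmdheader.cmdtype = CMD_FILL_STATUS;
            reply.cmdheader.datalength = sizeof(reply.fill_amount);
            get_engine_fill_status(reply.fill_amount);
            write(fd, &reply, sizeof(reply));
        } else {                                   // unknown command: skip its payload
            char skip[256];
            int left = hdr.datalength;
            while (left > 0) {
                ssize_t n = read(fd, skip, left < (int)sizeof(skip) ? left : (int)sizeof(skip));
                if (n <= 0) return;
                left -= (int)n;
            }
        }
    }
}
```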
As said ASCII is elegant etc but does it pay off to waste
network and cpu resources (and programming resources because ASCII requires
parsers) ?
For example client B could change parameters in linux sampler eg,
reverb amounts, FX send levels continuously while client A (a GUI)
display the faders which should be moving constantly.
With ASCII we send a lot of data around, need to parse it in the LS engine,
and then parse it back in client A, etc.
I'd like to hear your opinions about ASCII vs binary from you guys.
cheers,
Benno
Scrive Christian Schoenebeck <chr...@ep...>:
> Hi!
>
> Here is the promised initial draft for the LinuxSampler control protocol:
>
> http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.pdf
> http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.rtf
>
> I expect you to post your suggestions for improvements / corrections for the
>
> protocol. Sorry that I've not created a plain text version yet.
>
> CU
> Christian
>
-------------------------------------------------
This mail sent through http://www.gardena.net
|
|
From: Christian S. <chr...@ep...> - 2003-12-31 07:59:05
|
Hi! Here is the promised initial draft for the LinuxSampler control protocol: http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.pdf http://www.linuxsampler.org/api/draft-linuxsampler-protocol-00.rtf I expect you to post your suggestions for improvements / corrections for the protocol. Sorry that I've not created a plain text version yet. CU Christian |
|
From: David O. <da...@ol...> - 2003-12-30 19:42:05
|
On Tuesday 30 December 2003 10.26, be...@ga... wrote:
[...]
> David (Olofson), what do you suggest ? Using an event system
> similar to the one you use in audiality ?
Well, that's really more about scalability and accuracy; with a sample accurate event system, you can put ramps (of whatever kind you may have) exactly where you want them, and the "new ramp" calculations hit you only when you actually start a new ramp. If you're using linear ramps only, the code in the inner DSP loop is very simple and fast, but depending on what you're playing, you *may* get extra overhead due to the greater number of ramp events needed at times.
Basically, an event system has a low, fixed cost in the DSP units that use it. It does not have the accuracy limitations of systems with a control rate lower than the audio sample rate, nor does it have the higher fixed cost of converting control values to internal audio rate "control change coefficients" every N audio frames.
So, in short, I'd recommend an event system regardless of what kind of ramping is used, for any serious system that doesn't use audio rate control streams. Any fixed lower control rates will cause trouble in some situation, forcing sound programmers to use prerendered samples for trivial things like short attack noises and the like.
OTOH, this is probably not a major issue in a sampler. A virtual analog synth *must* have very accurate envelope timing for serious percussive sounds, but on a sampler, you tend to use envelopes and effects to tweak the timbre of complex sounds, rather than to create new sounds from very simple waveforms. A fixed control rate might work, but it would make LinuxSampler more specifically a sampler, without much scalability into "real" synthesis.
> For performance reasons I'd opt for a system that uses only linear
> segments for both volume and pitch enveloping.
> exp curves and other kind of envelopes can be easily simulated by
> using a serie of linear segments.
Note that, as Ian points out, non-linear segments aren't all that expensive. However, reverse calculations, splitting, combining and otherwise manipulating streams of "ramp" events gets more complicated with non-linear functions, so in some cases, the reduction of event density at the targets may not be worth the increased complexity in other places.
If only performance matters, I guess non-linear ramps could be faster than linear only, but I suspect the code would be a lot more complicated, if we are to do things like combining and manipulating event streams. Not sure, though. It's a balance thing. If there isn't too much code that's affected, a more complex data format might pay off.
> Keep in mind it must for example handle following tasks:
> assume there is a pitch envelope running (composed of linear
> segments). Now the user operates the pitchbender.
> The real time event is timestamped and delayed till the next audio
> fragment gets processed. If there is already an envelope running
> the pitchbender needs to "add" his own pitch to the current one
> possibly using an event. for example if we want two pitch envelopes
> modulating the same sample what's the best way to do it ?
Right; this is where linear segments become a bit easier.
However, note that not even linear segments are trivial in a system where the ramps of streams to combine can start and end at any time. There's also a risk of event density exploding, if you combine too many streams, as the normal solution would be to just split every time a new ramp event comes in from either source. If there is a risk that you'll frequently be combining streams with high density, you'll need to take care of this one way or another. Meanwhile, this problem does not exist in a fixed control rate system.
Hybrid solution: Dynamic control rate. Use timestamped events, but try to keep events locked to a common heartbeat as far as possible, so you get events from multiple inputs simultaneously most of the time. You could even require that all plugins in a sub-net use the same heartbeat at all times, so your event processors will only ever see simultaneous events.
I guess one could come up with hybrids from all over the scale between fixed control rate and timestamped events.
> Since pitch envelopes are made of "deltas" in theory one could
> simply add up the deltas when events come in.
Yes - if the adding is done where events turn into parameters for some DSP code. However, if you have dedicated event processing plugins, the normal action would be to update the internal deltas and generate new events. Thus, every input event results in one output event. Even eliminating doubles with the same timestamp requires extra work. (Not much, though. Just keep a flag to trig the generation of the output event before checking the # of frames to the next input event.)
> eg ... if there is only one envelope then the events should
> overwrite the current pitch delta but if you want to mix two or
> more envelopes then deltas should be added up (basically you could
> calculate the delta of the delta between events and simply add up
> that to the current delta. That way AFAIK it should work with one
> single and multiple active envelopes.
> Am I missing something ? :-)
Well, if you're going to do this in a seriously useful way, I don't think you can do it at the delta level near the DSP code. Combining controls usually includes some scaling as well as adding. More importantly, unless you want to hardwire the control routing, you'll need proper nets of event processor plugins, rather than support for multiple outputs to one input.
//David Olofson - Programmer, Composer, Open Source Advocate
.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
|
|
From: Paul D. <pa...@li...> - 2003-12-30 17:54:56
|
>> >there? Ok, it doesn't look good to include kernel headers in user space
>> >applications, but on the other hand the user might already have a 'better'
>> >version of atomic.h by using a more recent kernel version.
>>
>> 1) correct, including kernel headers is not OK in a user space app
>> 2) if there are any improvements to be made to atomic.h for any
>> particular architectures, we'll be happy to merge them in
>> 3) i don't think JACK itself uses any atomic stuff at all anymore. could
>> be wrong there.
>
>It doesn't, but it probably should for some platforms.
Really? Off-hand, I can't think of anything that requires it. Can you?
|