From: Christian S. <chr...@ep...> - 2002-11-25 23:21:43

On Sunday, 24 November 2002 at 22:25, Benno Senoner wrote:

> Not sure if ints are faster than bools, but OTOH I agree with you that
> bools are more elegant. (sometimes I write inelegant code, sorry).

I wasn't able to measure any performance benefit either; nevertheless, at least a typedef won't hurt.

> > Is there any benefit of using this double_to_int() function
> > [audiovoice.cpp] and the asm code in it, instead of a normal type cast?
>
> See Steve's reply; this hack is needed only on x86. PPCs and most of the
> other archs can use simple casts.

I compared both and got exactly the same results, whether asm or normal type cast, and whether integral or rational number (Athlon). But the performance is indeed a point.

> > I was a bit surprised about the ring buffer. I expected a buffer similar
> > to the ones which are used in ASIO or Gigasampler. These consume less CPU
> > time but on the other hand, of course, latency values depend on the
> > buffer size. How about giving the user the choice which kind of buffer to
> > use?
>
> What do you mean by buffers similar to ASIO etc.? Can you elaborate?

AFAIK ASIO uses two buffers which are isolated. Each buffer is only accessed for either reading or writing, never both at the same time. So buffer A gets filled by the disk stream while buffer B is read by the audio thread; after that, buffer B gets filled while buffer A is read, and so on. The time for one period is fixed by the buffer size, sampling rate and number of audio channels. Somebody already posted an article about that, but I haven't been able to find it again.

The advantage of those simple buffers is that you can always expect a fixed buffer size for reading/writing at any time. With your approach, you always have to check how many bytes are left to read/write, and whether you have to continue from the beginning of the buffer after accessing some pieces at the end. That's why your way is a bit more CPU hungry, but it circumvents the latency problems ASIO and Gigasampler have (latency fixed to the buffer size, and latency jitter, or even higher latency to correct that jitter).

> The ring buffer is very efficient: basically the audio thread can
> access the ring buffer as if it were a linear sample; at the end of the
> processing of a fragment you simply do read_ptr = (read_ptr +
> fragment_size) & (ringbuf_size-1); (ringbuf_size is a power of two to
> avoid relatively slow % (modulus) ops).

Yes, I found the latter a very clever and efficient trick to keep the pointer within the boundaries.

> If you have a more efficient scheme in mind, please tell us.
> The final latency does not depend on the ring buffer size, since this
> buffer only compensates for disk latency.

Of course not; I meant those ASIO/Giga buffers.

> On the audio side we simply use regular fragment-based audio output.
> This means usually only 2-3 fixed-size buffers, so no ring buffers are
> involved.

I guess you mean these audio_sum, audio_buf arrays [audiothread.cpp] used to calculate the mix and send it to the audio device, right?

> > And finally I was a little bit unsure about the purpose of the
> > check_wrap() method in the RingBuffer template. It ensures a sequential
> > reading of 2*WRAP_ELEMTS = 2048 elements, right?
> > If so, shouldn't there be a check whether there are enough elements
> > available for reading?
>
> check_wrap() is suboptimal, since it checks when it is about time
> to replicate some data past the buffer end so that the interpolator can
> fetch samples past the "official ring buffer end boundary".

Why is this inefficient? Do you mean because of the memcpy() in check_wrap() that copies a portion within the buffer?

> This check can be moved into the disk thread, which accesses the buffer
> at a much lower frequency and where it is easy to figure out when the
> last read reached the ring buffer end position.

I'm not sure what you're getting at. Do you mean that it's more likely that read/write accesses to the buffer won't interfere/overlap, because the audio thread reads faster than the disk thread can fill up the space, due to the higher priority of the audio thread?

Regards, Christian
From: Benno S. <be...@ga...> - 2002-11-24 21:18:46

On Sat, 2002-11-23 at 22:00, Christian Schoenebeck wrote:

> Is there a reason that int types were always used for actually bool types,
> or is it just some kind of habit?

Not sure if ints are faster than bools, but OTOH I agree with you that bools are more elegant. (Sometimes I write inelegant code, sorry.)

> Is there any benefit of using this double_to_int() function [audiovoice.cpp]
> and the asm code in it, instead of a normal type cast?

See Steve's reply; this hack is needed only on x86. PPCs and most other archs can use simple casts.

> I was a bit surprised about the ring buffer. I expected a buffer similar to
> the ones which are used in ASIO or Gigasampler. These consume less CPU time
> but on the other hand, of course, latency values depend on the buffer
> size. How about giving the user the choice which kind of buffer to use?

What do you mean by buffers similar to ASIO etc.? Can you elaborate? The ring buffer is very efficient: basically the audio thread can access the ring buffer as if it were a linear sample. At the end of the processing of a fragment you simply do read_ptr = (read_ptr + fragment_size) & (ringbuf_size-1); (ringbuf_size is a power of two to avoid relatively slow % (modulus) ops). If you have a more efficient scheme in mind, please tell us.

The final latency does not depend on the ring buffer size, since this buffer only compensates for disk latency. The bigger this ring buffer, the smaller the risk of ending up in an "empty disk voice buffer" situation. The disk buffer size can be smaller for low-msec disk access times. On the audio side we simply use regular fragment-based audio output. This means usually only 2-3 fixed-size buffers, so no ring buffers are involved.

> And finally I was a little bit unsure about the purpose of the check_wrap()
> method in the RingBuffer template. It ensures a sequential reading of
> 2*WRAP_ELEMTS = 2048 elements, right?
> If so, shouldn't there be a check whether there are enough elements available
> for reading?

check_wrap() is suboptimal, since it checks when it is about time to replicate some data past the buffer end so that the interpolator can fetch samples past the "official ring buffer end boundary". This check can be moved into the disk thread, which accesses the buffer at a much lower frequency and where it is easy to figure out when the last read reached the ring buffer end position (in that case, read the data from disk, write it at the ring buffer start, and replicate some samples past the ring buffer end).

If you have more questions or if some of the issues are not clear, please let us know on the mailing list.

cheers, Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
From: Steve H. <S.W...@ec...> - 2002-11-24 00:24:14

On Sat, Nov 23, 2002 at 10:00:21 +0100, Christian Schoenebeck wrote:

> Is there a reason that int types were always used for actually bool types,
> or is it just some kind of habit?

Ints are generally the fastest types to access, so using them for bools makes sense.

> Is there any benefit of using this double_to_int() function [audiovoice.cpp]
> and the asm code in it, instead of a normal type cast?

Casting to int truncates toward zero (which isn't always what you want) and is very slow on x86: it requires flushing the register stack, resetting the FPU's rounding parameters, doing the conversion, and flushing the stack again.

The other questions are specific to evo, so I can't help you there.

- Steve
From: Christian S. <chr...@ep...> - 2002-11-23 20:58:50

I read the evo-0.0.6 code to become acquainted with the basics involved in this project and in audio applications in general (as I've never written an audio app before). I appreciate that the code isn't that complex, so you can easily get an overview of the most relevant parts. It left me with some minor questions, however, that I hope somebody can explain to me (I guess that's most likely Benno, but if anybody else can tell me too... :)

Is there a reason that int types were always used for actually bool types, or is it just some kind of habit?

Is there any benefit of using this double_to_int() function [audiovoice.cpp] and the asm code in it, instead of a normal type cast?

I was a bit surprised about the ring buffer. I expected a buffer similar to the ones which are used in ASIO or Gigasampler. These consume less CPU time but on the other hand, of course, latency values depend on the buffer size. How about giving the user the choice which kind of buffer to use?

And finally I was a little bit unsure about the purpose of the check_wrap() method in the RingBuffer template. It ensures a sequential reading of 2*WRAP_ELEMTS = 2048 elements, right? If so, shouldn't there be a check whether there are enough elements available for reading?

Regards, Christian
From: Steve H. <S.W...@ec...> - 2002-11-21 09:24:58

On Thu, Nov 21, 2002 at 03:06:26AM -0300, Juan Linietsky wrote:

> I think I don't understand the concept of zipper noise. Isn't it the one
> that occurs when setting the resonance too high? In that case I don't see
> why that should be avoided, since it's common in any synth I've tried.

I think you're thinking of self resonance (the oscillator-like tone you get from high resonance and non-zero input). Zipper noise is an annoying buzzing sound you get when the recalculation or interpolation steps are too far apart, caused by discontinuities in the amplitude curve, or sometimes in the first derivative of the amplitude.

- Steve
From: Juan L. <co...@re...> - 2002-11-21 06:06:31

On Sat, 16 Nov 2002 10:43:36 -0000, "Paul Kellett" <pau...@ma...> wrote:

> I'm replying to things in the digest - sorry I'm out of sync with everyone
> else...!
>
> Benno Senoner <be...@ga...> wrote:
> >
> > Regarding adding a, let's say, resonant LP filter to the sample output:
> > I guess the coefficients need to be smoothed out in some way too,
> > otherwise zipper noise will probably show up again.
>
> Filter cutoff is much less sensitive to zipper noise than amplitude, and
> for typical sampler patches, updating the filter coefficients every 32
> or 64 samples is OK, with no interpolation. The only time this is a
> problem is for fast, high-resonance filter "chirps", but it still sounds
> acceptable, just not as good as an analog or virtual analog synth.

I think for a sampler this isn't much to worry about; I'd just put the focus on speed and quality instead of features :)

> > What would be a good tradeoff for achieving low-CPU-usage zipper-noise-free
> > filter modulation? (some kind of interpolation of precomputed
> > coefficients?)
>
> Use a state variable filter. Almost every other hardware and software
> synth does, and you can adjust cutoff and resonance directly without
> worrying about stability, or any heavy calculations.

I think I don't understand the concept of zipper noise. Isn't it the one that occurs when setting the resonance too high? In that case I don't see why that should be avoided, since it's common in any synth I've tried.

> Steve Harris <S.W...@ec...> wrote:
> >
> > BTW do we know for sure that samplers use exponential envelopes? I guess
> > we need linear ones too, but they are easy to implement.
>
> Decay and release should be exponential. It's nice to offer a choice of
> linear or other curves (mostly useful for drum sounds), but for most
> instruments linear decay/release sounds bad - the fade-out is too sudden.
> But the ear is quite tolerant if the curves are only approximately
> exponential.

The problem with exponential release envelopes is that they take too long to fade, and that can leave voices hanging and consuming CPU for long periods where they are not, or almost not, being heard. For this I think the out-of-voices-removal algorithm should be given good voice priority values to work with. And yes, exponential attack envelopes make it sound like the voices take some time to start; I noticed this in synths like the DX7.

> > get some recordings from samples of high-freq sine waves with envelopes.
> > I think Frank N. has done this already for the S2000. Frank, are you on
> > the list?
>
> It's better to use square waves; then the shape of the envelope (for
> example if there are sharp corners between each stage) is not hidden by
> the shape of the waveform.

Squares are so cool :) I use them a lot to check the filter response and variation.

> Benno Senoner <be...@ga...> wrote:
> >
> > It is probably the easiest to keep track of the previous pitch value
> > and then simply interpolate linearly to the new value within a, let's say,
> > 0.5 - 1 msec time frame. I think this would smooth out things nicely,
> > right?
> >
> > (Steve, others, do you agree?)
>
> Agree. But a lot of the time, where the envelopes and any LFOs are
> changing slowly, you may not even need to interpolate pitch and filter
> cutoff, just step every 1 ms.

Unless the LFO is timed _very_ fast (which usually defeats the purpose of the L in it :). There is no need to interpolate the pitch between resampling fragments, because you will definitely not notice it; it never makes any crazy jumps in the sample, it simply speeds up/slows down, so there isn't really a way for artifacts to pop up.

BTW! I've been swamped with work lately; I need to find some time to finish up and upload a working skeleton program!

Cheers, Juan Linietsky
From: Matthias W. <mat...@in...> - 2002-11-16 17:43:28

On Fri, Nov 15, 2002 at 07:22:35PM -0800, Josh Green wrote:

> > AFAIK there are no r/w locks in the current linuxthreads implementation.
> > I read about plans to include those in connection with the NPT (native
> > posix threads).
> >
> > matthias
>
> What about pthread_rwlock_t (you can see this in
> /usr/include/pthread.h)? I notice that the function prototypes are
> surrounded by an "#ifdef __USE_UNIX98" which I'm unfamiliar with. I can
> do a "p sizeof (pthread_rwlock_t)" in gdb with a pthread-linked program,
> so it seems to be available (32 bytes for a mutex, not bad I guess). Cheers.
> Josh Green

I played around with this when I needed 'pthread_mutexattr_settype' and found out that I had to define

#define _XOPEN_SOURCE 600

before the first system include in my code; this will set some other defines in /usr/include/features.h. I have to note that lots of basic types (I guess one of them was size_t) got screwed up, so I had to drop it again. Besides, I'd try to use atomic types and atomic reads/writes to avoid locking wherever possible.

matthias
From: Paul K. <pau...@ma...> - 2002-11-16 10:44:23

I'm replying to things in the digest - sorry I'm out of sync with everyone else...!

Benno Senoner <be...@ga...> wrote:

> Regarding adding a, let's say, resonant LP filter to the sample output:
> I guess the coefficients need to be smoothed out in some way too,
> otherwise zipper noise will probably show up again.

Filter cutoff is much less sensitive to zipper noise than amplitude, and for typical sampler patches, updating the filter coefficients every 32 or 64 samples is OK, with no interpolation. The only time this is a problem is for fast, high-resonance filter "chirps", but it still sounds acceptable, just not as good as an analog or virtual analog synth.

> What would be a good tradeoff for achieving low-CPU-usage zipper-noise-free
> filter modulation? (some kind of interpolation of precomputed
> coefficients?)

Use a state variable filter. Almost every other hardware and software synth does, and you can adjust cutoff and resonance directly without worrying about stability, or any heavy calculations.

Steve Harris <S.W...@ec...> wrote:

> BTW do we know for sure that samplers use exponential envelopes? I guess
> we need linear ones too, but they are easy to implement.

Decay and release should be exponential. It's nice to offer a choice of linear or other curves (mostly useful for drum sounds), but for most instruments linear decay/release sounds bad - the fade-out is too sudden. But the ear is quite tolerant if the curves are only approximately exponential.

> get some recordings from samples of high-freq sine waves with envelopes.
> I think Frank N. has done this already for the S2000. Frank, are you on
> the list?

It's better to use square waves; then the shape of the envelope (for example if there are sharp corners between each stage) is not hidden by the shape of the waveform.

Benno Senoner <be...@ga...> wrote:

> It is probably the easiest to keep track of the previous pitch value
> and then simply interpolate linearly to the new value within a, let's say,
> 0.5 - 1 msec time frame. I think this would smooth out things nicely,
> right?
>
> (Steve, others, do you agree?)

Agree. But a lot of the time, where the envelopes and any LFOs are changing slowly, you may not even need to interpolate pitch and filter cutoff, just step every 1 ms.

> Ok, but assume these nice LP filter wah-wah effects.
> The LFO frequency in this case is up to a few Hz, but the modulation
> frequency (filter coefficient change rate)? How low can it go so that
> zipper noise can be avoided?

There is no simple answer to this - you will hear zipper noise on a low/mid-frequency sine wave that you won't hear on any other sound, so maybe it is better to have a user-adjustable "quality" setting for what gets interpolated. If someone is going to be playing big piano and orchestral samples with no processing, they can switch to the lowest setting (so best if it's not actually labelled "quality"!), or offer an unprocessed playback option like HALion does.

Paul.
_____________________________
m a x i m | digital audio
http://mda-vst.com
_____________________________
From: Steve H. <S.W...@ec...> - 2002-11-16 08:15:43

On Sat, Nov 16, 2002 at 12:48:20 +0100, Frank Neumann wrote:

> Please find the following files for analysis:
> http://lakai.sf.net/sinetest.wav.gz (the sample, 440 KByte packed)
> http://lakai.sf.net/sine1_attack.png (a screenshot of a close look at the
> attack phase)
> http://lakai.sf.net/sine2_release.png (the same for the release phase)

Thanks, that's great.

> If you need more analysis, let me know; however, I'll be on a company trip
> (to Bristol - yeah, UK, unfortunately not So'ton :-}) from Sunday -
> Thursday, so don't expect an answer before Friday.

Shame, I'm up in that part of the country quite often, but not this week.

Cheers, Steve
From: Josh G. <jg...@us...> - 2002-11-16 03:22:30

On Fri, 2002-11-15 at 13:18, Matthias Weiss wrote:

> On Wed, Nov 13, 2002 at 10:35:51PM -0800, Josh Green wrote:
> >
> > consist of samples, instruments, presets, zones etc., there are quite a
> > lot of them. That's a lot of wasted space just for a lock. I guess I'll
> > have to look into some other threading libraries, perhaps just go
> > pthread or something. The ideal would be a recursive R/W lock. I've seen
> > these in the pthread library, but I'm not sure if they are recursive.
>
> AFAIK there are no r/w locks in the current linuxthreads implementation.
> I read about plans to include those in connection with the NPT (native
> posix threads).
>
> matthias

What about pthread_rwlock_t (you can see this in /usr/include/pthread.h)? I notice that the function prototypes are surrounded by an "#ifdef __USE_UNIX98" which I'm unfamiliar with. I can do a "p sizeof (pthread_rwlock_t)" in gdb with a pthread-linked program, so it seems to be available (32 bytes for a mutex, not bad I guess). Cheers.

Josh Green
From: Josh G. <jg...@us...> - 2002-11-16 03:12:58

I was just having a few thoughts concerning different implementations of synth primitives (envelopes, LFOs, wavetable interpolation, filters, etc.). It seems like if LinuxSampler is going to be modular (or JIT based), there could be several ways to do things with different CPU/quality trade-offs. It might even be nice to have very expensive algorithms available (6+ point interpolation or something :). This could be useful because one could create a MIDI sequence of a song, compose it with less CPU-hungry synth primitives, and then have the option of rendering the final piece in non-realtime using all the highest quality primitives :)

So it seems building the modular framework is probably the most important stage, no? Or is the current focus to create something that just works? Anyway, just some thoughts. Cheers.

Josh Green
From: Frank N. <bea...@we...> - 2002-11-15 23:55:20

Hi,

Steve wrote:

> BTW do we know for sure that samplers use exponential envelopes? I guess we
> need linear ones too, but they are easy to implement. We should probably
> get some recordings from samples of high-freq sine waves with envelopes.
> I think Frank N. has done this already for the S2000. Frank, are you on
> the list?

Yes, I'm here :-). I just made a short example of this for you; I hope this is what you were asking for, I have a hard time trying to catch up with all the mailing lists :-).

I loaded a sine wave into the sampler, set the program's attack and release times to 0 and played a couple of high-pitched notes, sampled this into the PC (44.1 kHz) and had a closer look at the attack and release phases of the sample. (For analysis, Conrad Parker's "sweep" is highly recommended; I just learnt recently that you can even vertically zoom into a sample by using Shift+Cursor up/down, which helps a lot in finding the exact starting point of a sample :-)

Please find the following files for analysis:

http://lakai.sf.net/sinetest.wav.gz (the sample, 440 KByte packed)
http://lakai.sf.net/sine1_attack.png (a screenshot of a close look at the attack phase)
http://lakai.sf.net/sine2_release.png (the same for the release phase)

It was interesting for me to discover that the release phase is always faded out somewhat exponentially (and "rather slowly", taking about 2.2 ms), while the attack phase is very short; looking _very_ closely at it, I'd say it's about 28 samples long, ~0.6 ms. But both are _not zero_ (so really "hard clicks" can never occur). That's the same behaviour I'd expect from any software instrument, too.

If you need more analysis, let me know; however, I'll be on a company trip (to Bristol - yeah, UK, unfortunately not So'ton :-}) from Sunday - Thursday, so don't expect an answer before Friday.

Greetings, Frank
From: Matthias W. <mat...@in...> - 2002-11-15 21:22:50

On Wed, Nov 13, 2002 at 10:35:51PM -0800, Josh Green wrote:

> consist of samples, instruments, presets, zones etc., there are quite a
> lot of them. That's a lot of wasted space just for a lock. I guess I'll
> have to look into some other threading libraries, perhaps just go
> pthread or something. The ideal would be a recursive R/W lock. I've seen
> these in the pthread library, but I'm not sure if they are recursive.

AFAIK there are no r/w locks in the current linuxthreads implementation. I read about plans to include those in connection with the NPT (native posix threads).

matthias
From: Paul K. <pau...@ma...> - 2002-11-15 14:15:44

Steve Harris <S.W...@ec...> wrote:

> Benno and I were discussing envelope generation last night. I think that
> the right way to generate an exponential envelope (I checked some synths'
> outputs too and it looks like this is the way it's done) is to feed a
> constant value into a LP filter with different parameters for each stage.

Yes, so the envelope level tends exponentially to a target level. Where this gets complicated is the attack, which should have a target level maybe 1.5 times its end level, otherwise you spend a long time at nearly full volume waiting for the decay to start. DLS specifies the attack should be linear, not exponential, and I tend to agree with that - for short attacks it doesn't sound any different, but for long attacks an exponential curve gets too loud too soon. Some softsynths now have much more complicated envelopes, with a time, target level and curve (variable between exp/lin/log) for each stage, but it's important to let the user set up a simple ADSR if that is all that's needed.

> I suspect that, in general, you need to calculate the amplitude of the
> envelope for each sample, to avoid stairstepping (zipper noise).

Yes, or use short linear segments and update the envelope every 32 samples, for example (64 samples is too long, and people will complain about the resolution).

> Example code:
>
> env = env_input * ai + env * a;

May be faster with one multiplication:

env += ai * (env_input - env);

If you allow real-time control of envelope times/rates, counting down the time to the next stage can get complicated, so it might be better to trigger the next stage when you reach a certain level. Here is some nasty code that does it that way, so env_rate could be modulated in real time (but to be able to modulate the sustain_level, you would have to make env_target a pointer).

//initialize:
env = 0.0f;
env_rate = ATTACK_RATE;
env_thresh = 1.0f;
env_target = 1.5f; //else we will never reach threshold

//per sample:
env += env_rate * (env_target - env);
if (env > env_thresh) //end of attack
{
    env_rate = DECAY_RATE;
    env_thresh = env; //move threshold out of the way
    env_target = SUSTAIN_LEVEL;
    //could set a flag so this block is skipped in future
}

//note off:
env_rate = RELEASE_RATE;
env_target = 0.0f; //kill the voice before this de-normals!

Paul.
_____________________________
m a x i m | digital audio
http://mda-vst.com
_____________________________
From: Paul K. <pau...@ma...> - 2002-11-15 14:15:44

Benno Senoner <be...@ga...> wrote:

> > If each effect can be chained into the input of the next effect, one
> > routable FX send may be enough.
>
> Sorry, I am no expert here, but taking the usual reverb, chorus case I
> don't think they can be chained, can they?

I think even some GM modules (Roland?) let you send some of the chorus output into the reverb. This is what I meant... instead of the effects being in series (which can also be useful, depending on the effects), the first effect is sent to the output *and* has a send level into the next effect. One example of where series effects (with no way of routing the signal from one effect to another) are not good enough is delay and reverb: you hear the signal with reverb, but then the delay repeats are dry - this sounds silly!

> I think separate send and insert FXes should be supported.
> They can be either stereo or mono (probably for insert FXes it makes
> sense to keep them mono in the cases where the sample sources are mono).

...except you might have a mono sample, but the pan of each voice might be random or track the keyboard, so unless you are applying effects to each voice individually (something the Native Instruments Kontakt sampler allows), inserts need stereo busses.

Paul.
_____________________________
m a x i m | digital audio
http://mda-vst.com
_____________________________
From: Benno S. <be...@ga...> - 2002-11-15 13:04:09
|
On Fri, 2002-11-15 at 13:31, Steve Harris wrote: > > I dont understand the question, the envelope shape is dynamic depending on > the length of the appropriate section (attack, decay, etc). The only thing > that needsa to be selected is the fudge factor (4.0 ish IIRC) which > determines epsilon for the envelope. Ok. > > Well, the code for this is created on demand right? (a JIT sampler ;) > So if there is no envelope there is no envelope code. If there is an > envelope, then when we have reached epsilon, the envelope has (in theory) > come to an end, we can fade to 0.0 to make sure, and then the voice has > finished. But I meant a voice that can be dyncamically enveloped at runtime where the voice sucks up less CPU when no enveloping is active. > > BTW do we know for sure than samplers use exponential envelopes? I guess we > need linear ones too, but they are easy to implement. We should probably > get some recordings from samples of high freq sinewaves with envelopes. > I think Frank N. has done this allready for the S2000, Frank are you on > the list? Since I'm a bit CPU-power paranoid at this design stage, I thought it it really make sense to use the exponentials. I mean: why not use only linear envelopes and "emulate" exponentials with a few linear segments. The big advantage of linear that it costs us only one addidional addition whiche your exponential code is more expensive. with linear envelopes we have: while(1) { output_sample(pos); sample_pos += pitch; pitch += pitch_delta; volume += volume_delta; } These two _delta additions add very little to the total CPU cost but provide us with good flexibility. When no pitch/volume enveloping is occurring, simply set pitch_delta and volume_delta to 0 When applying modulation generate events that set the *_delta variables in appropriate intervals. 
Assume we want to emulate exponentials: generate events at, let's say, 1/8
of the sampling rate, and I think it is probably impossible to distinguish
the approximation from the exact curve (we are talking about relatively
"slow" volume and pitch modulation).

For example, assume we process MIDI pitch bend events with the linear
interpolator. It is probably easiest to keep track of the previous pitch
value and then simply interpolate linearly to the new value within a, let's
say, 0.5 - 1 msec time frame. I think this would smooth things out nicely,
right? (Steve, others, do you agree?)

> Yes. Though you can get away without it if you know the cutoff isn't
> changing too rapidly (eg. a slow LFO).

Ok, but assume these nice LP filter wah-wah effects. The LFO frequency in
this case is up to a few Hz, but the modulation frequency (filter
coefficient change rate)? How low can it go so that zipper noise can be
avoided?

> > What would a good tradeoff for achieving low-CPU usage zippernoise-free
> > filter modulation be ? (some kind of interpolation of precomputed
> > coefficients ?)
>
> I think so. Not all filters like you doing this, but most are OK IIRC.
> I have some SVF and biquad test code around, so I can test this if you
> like. I think my LADSPA SVF uses linear interp. to smooth the
> coefficients.

Ok, let us know.

Benno.

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
|
From: Steve H. <S.W...@ec...> - 2002-11-15 12:32:34
|
On Fri, Nov 15, 2002 at 01:12:42 +0100, Benno Senoner wrote:
> Hi,
> the shapes do look nice, I was wondering about two things:
> What the ideal coefficients would be so that very fast envelopes can
> be achieved while slower ones are smoothed out so that no zipper noise
> is audible.

I don't understand the question, the envelope shape is dynamic depending on
the length of the appropriate section (attack, decay, etc). The only thing
that needs to be selected is the fudge factor (4.0 ish IIRC) which
determines epsilon for the envelope.

> BTW since in most cases (in absence of envelopes) the LP smoothing is
> not needed, it would be wise to skip that code (to save cycles) when the
> envelope has reached the final point (up to a very small epsilon since
> we are talking of exponentials). What do you suggest here ?

Well, the code for this is created on demand right? (a JIT sampler ;)
So if there is no envelope there is no envelope code. If there is an
envelope, then when we have reached epsilon, the envelope has (in theory)
come to an end, we can fade to 0.0 to make sure, and then the voice has
finished.

BTW do we know for sure that samplers use exponential envelopes? I guess we
need linear ones too, but they are easy to implement. We should probably
get some recordings from samples of high freq sinewaves with envelopes.
I think Frank N. has done this already for the S2000, Frank are you on
the list?

> Regarding adding a, let's say, resonant LP filter to the sample output:
> I guess the coefficients need to be smoothed out in some way too,
> otherwise zipper noise will probably show up again.

Yes. Though you can get away without it if you know the cutoff isn't
changing too rapidly (eg. a slow LFO).

> What would a good tradeoff for achieving low-CPU usage zippernoise-free
> filter modulation be ? (some kind of interpolation of precomputed
> coefficients ?)

I think so. Not all filters like you doing this, but most are OK IIRC. 
I have some SVF and biquad test code around, so I can test this if you like. I think my LADSPA SVF uses linear interp. to smooth the coefficients. - Steve |
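The coefficient-interpolation idea can be sketched with a Chamberlin state-variable filter whose frequency coefficient is ramped linearly across each block instead of jumping between values. This is a hedged illustration of the general technique, not code from Steve's LADSPA plugin; the type and function names are hypothetical.

```c
/* Hedged sketch of coefficient smoothing on a Chamberlin state-variable
   filter: the frequency coefficient `f` is linearly interpolated across
   the block, the usual cure for zipper noise on cutoff changes. */
typedef struct { float low, band; } SVF;

void svf_run_block(SVF *s, float *buf, int n,
                   float f_start, float f_end, float q)
{
    float f = f_start;
    float df = (f_end - f_start) / (float)n; /* per-sample coeff step */
    for (int i = 0; i < n; i++) {
        float high = buf[i] - s->low - q * s->band;
        s->band += f * high;
        s->low  += f * s->band;
        buf[i] = s->low; /* low-pass output */
        f += df;
    }
}
```

The extra cost is one addition per sample, in the same spirit as the linear envelope deltas discussed earlier in the thread; as Steve notes, not every filter topology tolerates having its coefficients swept like this, but the SVF generally does.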
From: Benno S. <be...@ga...> - 2002-11-15 12:00:59
|
Hi,
the shapes do look nice. I was wondering about two things:
What would the ideal coefficients be so that very fast envelopes can be
achieved, while slower ones are smoothed out so that no zipper noise is
audible?

BTW since in most cases (in absence of envelopes) the LP smoothing is not
needed, it would be wise to skip that code (to save cycles) when the
envelope has reached the final point (up to a very small epsilon since we
are talking of exponentials). What do you suggest here ?

Regarding adding a, let's say, resonant LP filter to the sample output:
I guess the coefficients need to be smoothed out in some way too, otherwise
zipper noise will probably show up again.

I've found these interesting msgs on the saol-users list but I do not have
a deep understanding of the things mentioned in these mails.

http://sound.media.mit.edu/mpeg4/saol-users/0179.html
http://sound.media.mit.edu/mpeg4/saol-users/0178.html

What would a good tradeoff for achieving low-CPU usage zippernoise-free
filter modulation be ? (some kind of interpolation of precomputed
coefficients ?)

Benno

On Thu, 2002-11-14 at 13:28, Steve Harris wrote:
> Hi all,
>
> Benno and I were discussing envelope generation last night, I think that
> the right way to generate an exponential envelope (I checked some synths
> outputs too and it looks like this is the way it's done) is to feed a
> constant value into a LP filter with different parameters for each stage.

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
|
From: Steve H. <S.W...@ec...> - 2002-11-14 14:44:49
|
I'm assuming you meant to reply to the list...

On Thu, Nov 14, 2002 at 02:37:54 +0000, Nathaniel Virgo wrote:
> > For reverb and chorus I think you want them to be serial; parallel:
> >
> >      .----> chorus ---.
> >      |                |
> > -----+                +------>
> >      |                |
> >      '----> reverb ---'
> >
> > will sound odd I think.
>
> On my aging Yamaha CS1x keyboard (an XG thing with lots of
> "analogue-style" sounds and built in fx) they are in parallel on most of
> the presets. You can put them in series (kind of) but it makes the
> reverb take up more of the mix.

OK, well that answers that then. Sure, this can be configurable, we have to
support both anyway. And the code will be built dynamically, so it won't
really hurt speed.

> Why not allow the user to set up the routing however they want it?
> Perhaps you could simplify things a lot by letting the user send each
> voice to either a mono JACK output or a stereo pair, and do effects
> routing in something like Ardour. Or would that be
> inefficient/impractical/at odds with the aims of this project?

One of the (eventual) aims is to build optimal code paths for the effects
routing, to get the voice count as high / cpu load as low as possible.
That kind of rules out external processing, and it would be problematic
anyway since the number of active voices varies from block to block.

PS I was vaguely worried about the overhead from having to use position
independent code in the recompiled voices, but it turns out to only be a
few percent overhead on my PIIIM (which has terrible rspeed4 benchmark
performance BTW, >100 cycles for the last test). Benno, you could add
-fPIC -DPIC to the CFLAGS if you want to account for this in your
benchmarks.

- Steve
|
From: Steve H. <S.W...@ec...> - 2002-11-14 13:54:18
|
On Thu, Nov 14, 2002 at 03:52:15 +0100, Benno Senoner wrote:
> This means the benchmarks I posted in my previous mail
> (284 voices on P4 1.8GHz, 331 voices on Athlon 1.4GHz) are meant with 2
> different FX sends on a per-voice basis. With per-MIDI-channel the
> performance is probably around 500-600 voices on the same boxes. This
> means that there will be plenty of room for running the actual FXes and
> providing additional insert FXes like LP filters etc.

I think these benchmarks are optimistic, but for very simple voices it may
be approachable.

> > If each effect can be chained into the input of the next effect, one
> > routable FX send may be enough.
>
> Sorry I am no expert here, but taking the usual reverb, chorus case I
> don't think they can be chained, can they?

For reverb and chorus I think you want them to be serial; parallel:

     .----> chorus ---.
     |                |
-----+                +------>
     |                |
     '----> reverb ---'

will sound odd I think.

- Steve
|
From: Steve H. <S.W...@ec...> - 2002-11-14 13:44:28
|
On Thu, Nov 14, 2002 at 12:28:02 +0000, Steve Harris wrote:
> Example code:

There's a graph of the output here:
http://inanna.ecs.soton.ac.uk/~swh/foo.png

- Steve
|
From: Benno S. <be...@ga...> - 2002-11-14 13:41:21
|
Paul Kellett wrote:
> Mono FX sends / stereo FX returns is the most common configuration -
> on hardware samplers and synths, and on all but the biggest analogue
> mixers.

Ok, but what does this mean for the audible result of a panned mono signal
processed by an FX with mono sends? E.g. only the dry part gets panned
while the wet part is still centered in the middle? Is the audible result
ok, or does it sound bad at extreme panning positions?

I'd prefer to use mono sends (as default, stereo will be possible too) if
that is the standard, since it will save us some CPU cycles, which helps
to increase polyphony.

> But hardware synths and samplers will usually have fewer FX sends than
> they have FX busses, so you set the send level and destination for
> each source.

Yes, of course. Since we use the recompiler we can create as many FX sends
per voice as we wish. AFAIK the usual GM MIDI standard has two sends for
each MIDI channel (reverb and chorus).

The flexible nature of linuxsampler will allow arbitrary per-voice
dynamically routable FX sends, but in most cases this will not be needed:
when implementing a MIDI sampler we can simply mix all voices on the same
channel and then send the result to the FX, since all voices belonging to
the same channel share the same FX send level. This saves a lot of CPU.

This means the benchmarks I posted in my previous mail (284 voices on P4
1.8GHz, 331 voices on Athlon 1.4GHz) are meant with 2 different FX sends
on a per-voice basis. With per-MIDI-channel the performance is probably
around 500-600 voices on the same boxes. This means that there will be
plenty of room for running the actual FXes and providing additional insert
FXes like LP filters etc.

> If each effect can be chained into the input of the next effect, one
> routable FX send may be enough.

Sorry I am no expert here, but taking the usual reverb, chorus case I
don't think they can be chained, can they? 
> But if you want to support "insert" effects like compression, these
> should be on stereo busses. So the question turns into: do we have
> one sort of effect and use stereo FX busses, or do we have separate
> send and insert FX (or just have FX sends, and insert effects can be
> applied to outputs).

I think separate send and insert FXes should be supported. They can be
either stereo or mono (probably for insert FXes it makes sense to keep
them mono in the cases where the sample sources are mono).

Can the concept of per-channel FXes in the case of MIDI devices be applied
to inserts too? I guess yes. (E.g. let's say on MIDI chan 1 we have a
polyphonic synth sound and we want to use an insert FX (a lowpass) to
process the sound. In that case we can simply apply the FX to the channel
mix buffer, right?)

Benno

--
http://linuxsampler.sourceforge.net
Building a professional grade software sampler for Linux.
Please help us designing and developing it.
|
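The per-MIDI-channel optimisation Benno describes can be sketched as follows. This is an illustrative sketch, not LinuxSampler code: instead of applying a send level per voice, all voices on one channel are mixed into a channel buffer and a single send level is applied to the sum. All names are hypothetical.

```c
/* Hypothetical sketch of per-channel FX sends: the per-voice work is
   reduced to mixing, and the send multiply happens once per channel. */
void channel_mix_and_send(float **voices, int nvoices, int nframes,
                          float send_level,
                          float *channel_buf, float *fx_bus)
{
    for (int i = 0; i < nframes; i++)
        channel_buf[i] = 0.0f;
    for (int v = 0; v < nvoices; v++)          /* per voice: mix only  */
        for (int i = 0; i < nframes; i++)
            channel_buf[i] += voices[v][i];
    for (int i = 0; i < nframes; i++)          /* one send per channel */
        fx_bus[i] += channel_buf[i] * send_level;
}
```

The same `channel_buf` is also the natural place to run a per-channel insert FX such as the lowpass mentioned in the mail, before the send is taken.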
From: Steve H. <S.W...@ec...> - 2002-11-14 12:28:09
|
Hi all,

Benno and I were discussing envelope generation last night. I think that
the right way to generate an exponential envelope (I checked some synths'
outputs too and it looks like this is the way it's done) is to feed a
constant value into a LP filter with different parameters for each stage.

I suspect that, in general, you need to calculate the amplitude of the
envelope for each sample, to avoid stairstepping (zipper noise). I expect
Paul Kellett knows the right approach, so he should be able to say if
we're barking up the wrong tree.

Example code:

#include <math.h>
#include <stdio.h>

#define EVENTS 5

#define ENV_NONE    0
#define ENV_ATTACK  1
#define ENV_DECAY   2
#define ENV_SUSTAIN 3
#define ENV_RELEASE 4

void lp_set_par(double time, double *a, double *ai)
{
    *a = exp(-5.0 / time); /* The 5.0 is a fudge factor */
    *ai = 1.0 - *a;
}

int main()
{
    unsigned int event_time[EVENTS]   = {0, 100, 200, 400, 900};
    unsigned int event_action[EVENTS] = {ENV_ATTACK, ENV_DECAY, ENV_SUSTAIN,
                                         ENV_RELEASE, ENV_NONE};
    unsigned int i, event = 0;
    float env_input = 0.0f;
    double env = 0.0;
    double a = 0.0, ai = 0.0;
    float attack_level  = 1.0f;
    float sustain_level = 0.5f;
    float release_level = 0.0f;

    for (i = 0; i < 1000; i++) {
        /* the last event is ENV_NONE; skip it so we never read
           event_time[EVENTS] past the end of the array */
        if (event + 1 < EVENTS && i == event_time[event]) {
            switch (event_action[event]) {
            case ENV_ATTACK:  env_input = attack_level;  break;
            case ENV_DECAY:   env_input = sustain_level; break;
            case ENV_SUSTAIN: env_input = sustain_level; break;
            case ENV_RELEASE: env_input = release_level; break;
            }
            lp_set_par((double)(event_time[event + 1] - event_time[event]),
                       &a, &ai);
            event++;
        }
        env = env_input * ai + env * a; /* one-pole LP filter */
        printf("%g\n", env);
    }

    return 0;
}
|
From: Paul K. <pau...@ma...> - 2002-11-14 11:30:32
|
Benno Senoner <be...@ga...> wrote:
> I have an important question regarding the effect sends: (since I am not
> an expert here) Are FXes in soft samplers/synths usually stereo or mono ?
>
> The CPU power for two mono sends is about the same as for one single
> stereo send, so I was just wondering which way we should go initially.
> (mono I guess ?)

Mono FX sends / stereo FX returns is the most common configuration - on
hardware samplers and synths, and on all but the biggest analogue mixers.

But hardware synths and samplers will usually have fewer FX sends than
they have FX busses, so you set the send level and destination for each
source. If each effect can be chained into the input of the next effect,
one routable FX send may be enough.

But if you want to support "insert" effects like compression, these should
be on stereo busses. So the question turns into: do we have one sort of
effect and use stereo FX busses, or do we have separate send and insert FX
(or just have FX sends, and insert effects can be applied to outputs).

Paul.

_____________________________
m a x i m | digital audio
http://mda-vst.com
_____________________________
|
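The mono-send / stereo-return configuration Paul describes can be sketched like this. It is an illustrative sketch with hypothetical names, using a simple linear pan law for clarity: the dry voice is panned into a stereo master bus, while the FX send is a mono tap taken before panning (the effect itself would then produce a stereo return).

```c
/* Hypothetical sketch of mono FX sends with a panned dry path: the wet
   path stays mono until the effect's own stereo return. */
void voice_mix(const float *voice, int nframes,
               float pan,        /* 0.0 = hard left, 1.0 = hard right */
               float send_level,
               float *master_l, float *master_r, float *fx_send_mono)
{
    for (int i = 0; i < nframes; i++) {
        master_l[i] += voice[i] * (1.0f - pan);   /* dry path is panned */
        master_r[i] += voice[i] * pan;
        fx_send_mono[i] += voice[i] * send_level; /* send stays mono    */
    }
}
```

This makes Benno's question concrete: at extreme `pan` values the dry part sits hard left or right while the effect return remains centered, which is exactly the audible behaviour being asked about.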
From: Josh G. <jg...@us...> - 2002-11-14 06:35:20
|
On Wed, 2002-11-13 at 13:16, Matthias Weiss wrote:
> Hi Josh!
>
> First of all, very interesting idea!
>
> On Wed, Nov 13, 2002 at 12:35:26AM -0800, Josh Green wrote:
> > The way I'm currently thinking of things is that libInstPatch could act
> > as a patch "server" and would contain the loaded patch objects.
>
> Why is it necessary that those patch servers run in their own process?
> Why not simply add this functionality to the nodes themselves?
>
> > size/name or perhaps MD5 on the parameter data). In the case of the
> > GUI, I think it needs to have direct access to the patch objects (so
> > anything it is editing should be locally available, either being served
> > or synchronized to another machine).
>
> Hm, would that mean the GUI has to keep a copy of the state data? I
> think the state data (modulation parameters, etc.) should only be kept
> by the nodes, because those are the ones who need the data always
> available.
>
> It would look something like that:
>
> GUI1 <-->libInstPatch.so/LinuxSampler1
>  ^        ^
>  |        |
>  |        +- Vintage Dreams SF2 (Master)
>  |        +- Lots of Foo.DLS (Slave)
>  |        +- Wide Load.gig
>  |
>  +---->libInstPatch.so/LinuxSampler2
>  |        ^
>  |        |
> ...       +-->GUI2
>
> With this scheme, libInstPatch would be a shared library that handles
> (to the outside world) the peer communication as well as the
> communication with the GUI, and controls the state changes to the inside
> world (sample engine parameters). The state data would have to be kept
> one time per node, no duplication in the patch server and GUI.

That's actually what I meant. I was using the term "server" in reference
to the patch objects being synchronized between multiple clients; I was
assuming that it would be a shared library (it is currently like this).
So, no, the GUI would not need a separate copy of the data.

> > GUI and LinuxSampler would not necessarily be communicating with the
> > same libInstPatch server, they could be on separate synchronized
> > machines. 
> What's the advantage of this?

I think someone mentioned something about being able to edit patches on
one machine but actually sequencing on another. That was what I was
referring to. Your diagram above is an example of that :)

> > Anyways, that's how I see it. Does this make sense? Cheers.
>
> To me the idea is great and makes a lot of sense ;-).
>
> matthias

Thanks for the encouragement. Unfortunately I'm finding that my current
thread implementation in libInstPatch is less than perfect. I'm using the
glib GStaticRecMutex to lock individual patch objects, but just recently
realized that each mutex requires 40 bytes! Since patch objects consist of
samples, instruments, presets, zones etc, there are quite a lot of them.
That's a lot of wasted space just for a lock.

I guess I'll have to look into some other threading libraries, perhaps
just go pthread or something. The ideal would be a recursive R/W lock.
I've seen these in the pthread library, but I'm not sure if they are
recursive.

libInstPatch is going to need a bit of work. I have faith in the
architecture, but lots of debugging and optimization is in order :)
Cheers.

Josh Green
|