From: Robert J. <rob...@da...> - 2003-11-04 12:15:43
Hi,

> > yes GMPI will be very cool, finally a free VSTi ? :-)
> I heard Steinberg is not participating (boycotting it ?), I guess
> because VST is the de-facto standard and it is advantageous for them
> controlling the format, possibly giving them an edge over the competition.
> Sad ... I always heard "open source leads to fragmentation" ... but
> to me the Windows audio world seems much more fragmented:
> VST, DirectX, RTAS, EASI, GSIF, ReWire ...
> On Linux things do look much better: jack and LADSPA.
> I think an important API that is still missing is a sort of VSTi.
> Would it be better to extend LADSPA to support MIDI in too, or is it
> better to wait for GMPI ?

Getting off topic, but this is fun :)

MusE actually has an internal plugin interface like that: LADSPA + ALSA sequencer. It's a lightweight solution that works quite well. It has one major drawback though: it can't support non-realtime rendering, not to my knowledge at least.

> I think if we had a VSTi-like API it would lead to
> a big proliferation of soft-synth and sampler plugins.
> But for now we have to write standalone apps that output the audio via
> jack and perform MIDI input via the ALSA interface.

I hardly know all about this, so I'm probably missing a lot of intricate problems... I do however have another view of this...

For the long term (say 2-3 years from now) I think the jack solution with separate apps that talk to jackd could actually be even more powerful than a plugin API.

What is needed, however, is a working session handling system that can launch the connection graph. LADCCA is a step in the right direction; I think it needs to do more though...

> (which is nice but does not offer synchronous transfer, sample accuracy
> etc)

I'm not 100% sure what this means. Are you saying it wouldn't be possible to solve this within the realms of jack? Assuming it's not possible now...
What I see as the major reason for using plugins as opposed to jack apps is that they require less CPU overhead. (That they are in-proc is just a technicality in my opinion.)

The drawback is that there is all kinds of painful stuff that needs to be taken into account: stability, GUI toolkits, memory handling perhaps, possibly others...

Extrapolating into the future, CPU overhead (for IPC) is bound to become a lesser and lesser problem... When CPU is no longer a problem we invent new abstractions; in my mind jack is such an abstraction.

An interesting question would be how much of the work that goes into GMPI (which I understand has major brain power behind it ;) would be applicable to a jack-like API?

> > On Monday 03 November 2003 18.57, Mark Knecht wrote:
> > [...]
> > > If the conversion is not part of LS, then what's the additional
> > > latency incurred when playing a keyboard live? How long are MIDI
> > > events delayed going through a completely separate app? Is it
> > > deterministic, or will it vary from event to event?
> >
> > If you have an additional process that must be scheduled to process
> > every event, there is a significant risk of increased worst case
> > latency. I'm not sure how likely it is that an event is hit by two
> > "slow" reschedules (one in the MIDI processor and one in LS), but
> > it's most probably possible.
>
> The same applies to JACK, but I think with a good low-latency kernel
> the additional latency is 50-100 usec max.
> This means writing an ALSA user space MIDI router practically does not
> degrade the MIDI timing.

Interesting. I got intrigued by this and started poking at some code last night; we'll see if I can make something useful out of it.
I was thinking along the lines of a general purpose MIDI filter that could do lots of stupid stuff:

- velocity expand
- velocity compand
- delay
- arpeggio
- randomize
- split
- whatever

/Robert - <who is putting up his blast shield>

> I've seen keyboards that have builtin sequencers that run with a
> 500 Hz (2 msec period) timer and the MIDI files sound very good.
> So the ALSA user space router is absolutely not a problem.
>
> > RT-Dave, the control engineer, would assume this *will* happen
> > occasionally, effectively doubling the worst case latency, until
> > proven wrong. ;-)
>
> I'll do some latency graphs with jack + jack client when adding
> jack support to LS so we can measure direct ALSA output vs
> jack output.
> I think with the right low-latency kernel jack output at 3 msec can
> be done reliably, and that is what LS needs for tight note-on response.
>
> > There most certainly will be an increase in minimum and average
> > latency, but as long as event processing is done "instantly" (by
> > blocking on input and sending the resulting events instantly; no
> > timers and stuff in the MIDI processor), that effect should be
> > insignificant. (Microseconds...)
>
> Exactly. ALSA userspace MIDI routers are implemented using
> poll(), so they block until a MIDI event arrives.
> This means they respond instantly (minus the context switch time).
>
> > [...]
> > > On the other hand, since almost all of my MIDI goes through the
> > > ALSA stack somehow, and I view connections with kaconnect, could
> > > that be a place to put these velocity modifiers?
> >
> > Well, that was my first thought when I started following this thread -
> > but unfortunately, ALSA runs only on Linux. (And QNX, though I have
> > no idea what state that stuff is in nowadays.)
>
> Don't worry, ALSA's MIDI timing is excellent, no QNX needed.
> > It would be nicer IMHO, if things like the "MIDI corrector" could use
> > some portable plugin API - but OTOH, it can't be all that hard to
> > port it to various APIs. No big deal. What's important is that it
> > runs at the right place in the chain, and that it doesn't add
> > significant latency.
>
> Of course a builtin MIDI corrector is better (eg the table lookup) but
> the ALSA userspace router is ok too.
>
> Anyway, it is just a waste of time discussing the MIDI corrector
> stuff; we have much bigger problems: getting looping, enveloping, LFOs
> and articulation working.
>
> David: I told Christian we should implement a simple sample accurate
> event system in LS right from the start because it will save us many
> troubles later. For example, we can use the event system to do fast
> enveloping (lists of linear segments). This means sample accurate
> ramping, and it is very fast because you only need to increment the
> pitch value (pitch modulation) and/or volume value (volume modulation)
> by a delta amount.
> Exponential curves can be approximated by a succession of linear
> segments, and you could even modulate the pitch/volume in an arbitrary
> way by sending events with a frequency of eg. samplerate/4, which
> would still be very fast.
>
> With the current ALSA MIDI IN we cannot yet exploit the sample accurate
> event system fully, but if something like VSTi for Linux comes out LS
> will be ready for sample accurate MIDI events.
> Not to mention that we can lower the current realtime MIDI IN jitter
> when LS is played live by delaying the MIDI IN events based on the
> timestamp (we run the MIDI IN thread with higher priority than the
> audio thread, thus MIDI IN timing can have sub fragment-time precision).
> When some event code is available in LS I'd like you, David, to review
> it a bit, since you are the timestamped-events master :-)
>
> BTW: the switch() statement seems faster than function pointers since
> it constructs a jump table and does not need to save the return
> address on the stack.
> I think switch() will be ideal in the audio rendering routine to select
> various rendering functions, eg. sample playback with linear
> interpolation, cubic, with and without filter etc.
>
> cheers,
> Benno
> http://www.linuxsampler.org
>
> -------------------------------------------------
> This mail sent through http://www.gardena.net
>
> -------------------------------------------------------
> This SF.net email is sponsored by: SF.net Giveback Program.
> Does SourceForge.net help you be more productive? Does it
> help you create better code? SHARE THE LOVE, and help us help
> YOU! Click Here: http://sourceforge.net/donate/
> _______________________________________________
> Linuxsampler-devel mailing list
> Lin...@li...
> https://lists.sourceforge.net/lists/listinfo/linuxsampler-devel
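[Editor's note] Benno's point that a switch() over rendering modes compiles to a jump table (one indirect jump, no call/return overhead) can be sketched as below. This is not LinuxSampler code; the enum and function names are made up for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical interpolation modes for a sampler's render loop.
 * The names are illustrative; this is not LinuxSampler code. */
enum interp_mode { INTERP_NONE, INTERP_LINEAR, INTERP_CUBIC };

/* Return one interpolated sample at integer position i plus fractional
 * offset frac. A switch over a small dense enum typically compiles to a
 * jump table: one indirect jump, no saved return address on the stack. */
static float render_sample(const float *buf, size_t i, float frac,
                           enum interp_mode mode)
{
    switch (mode) {
    case INTERP_NONE:
        return buf[i];
    case INTERP_LINEAR:
        return buf[i] + frac * (buf[i + 1] - buf[i]);
    case INTERP_CUBIC: {
        /* 4-point Catmull-Rom cubic; requires i >= 1 and i + 2 in range. */
        float xm1 = buf[i - 1], x0 = buf[i];
        float x1  = buf[i + 1], x2 = buf[i + 2];
        float a = (3.0f * (x0 - x1) - xm1 + x2) * 0.5f;
        float b = 2.0f * x1 + xm1 - (5.0f * x0 + x2) * 0.5f;
        float c = (x1 - xm1) * 0.5f;
        return ((a * frac + b) * frac + c) * frac + x0;
    }
    }
    return 0.0f;
}
```

Whether the switch actually beats a function-pointer call depends on the compiler and CPU; the real win in a render loop usually comes from hoisting the mode check out of the per-sample loop entirely.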
From: Steve H. <S.W...@ec...> - 2003-11-04 11:45:31
On Tue, Nov 04, 2003 at 12:32:45 +0100, be...@ga... wrote:
> I think an important API that is still missing is a sort of VSTi.
> Would it be better to extend LADSPA to support MIDI in too, or is it
> better to wait for GMPI ? I think if we had a VSTi-like API it would
> lead to a big proliferation of soft-synth and sampler plugins.

I think it's better to wait for GMPI; there's no obvious way to extend LADSPA to handle instruments, so it would probably end up like VSTi *shudder*.

In the meantime jack + ALSA sequencer works pretty well for anything bigger than a trivial monosynth.

- Steve
From: <be...@ga...> - 2003-11-04 11:32:43
Scrive David Olofson <da...@ol...>:
> > Well I agree that broken velocity curves in MIDI keyboards are not
> > the sampler's problem but giving the user the possibility to remap
> > the velocity curves can be handy in some situations.
> > After all it's just a simple table with 128 entries:
> > velocity_out=velocity_remap[velocity_in] :-)
>
> Sure, it's a nice feature, but fixing broken input might not be the
> best use for it, at least not when you have sequencers and other stuff
> in the setup as well.
>
> That said, if we had a plugin API that can handle MIDI plugins (like
> XAP or GMPI), we could just have the sampler host the "MIDI
> corrector" plugin (and all sorts of other event processors and stuff
> for that matter), to avoid other hosts and potential latency issues.
> When using a sequencer, it would be natural to run the plugin on the
> sequencer's inputs instead, so you get "normal" events in the tracks.

Yes, GMPI will be very cool, finally a free VSTi ? :-)

I heard Steinberg is not participating (boycotting it ?), I guess because VST is the de-facto standard and it is advantageous for them controlling the format, possibly giving them an edge over the competition. Sad ... I always heard "open source leads to fragmentation" ... but to me the Windows audio world seems much more fragmented: VST, DirectX, RTAS, EASI, GSIF, ReWire ... On Linux things do look much better: jack and LADSPA.

I think an important API that is still missing is a sort of VSTi. Would it be better to extend LADSPA to support MIDI in too, or is it better to wait for GMPI ? I think if we had a VSTi-like API it would lead to a big proliferation of soft-synth and sampler plugins.

But for now we have to write standalone apps that output the audio via jack and perform MIDI input via the ALSA interface (which is nice but does not offer synchronous transfer, sample accuracy etc).

> On Monday 03 November 2003 18.57, Mark Knecht wrote:
> [...]
> > If the conversion is not part of LS, then what's the additional
> > latency incurred when playing a keyboard live? How long are MIDI
> > events delayed going through a completely separate app? Is it
> > deterministic, or will it vary from event to event?
>
> If you have an additional process that must be scheduled to process
> every event, there is a significant risk of increased worst case
> latency. I'm not sure how likely it is that an event is hit by two
> "slow" reschedules (one in the MIDI processor and one in LS), but
> it's most probably possible.

The same applies to JACK, but I think with a good low-latency kernel the additional latency is 50-100 usec max. This means writing an ALSA user space MIDI router practically does not degrade the MIDI timing. I've seen keyboards that have builtin sequencers that run with a 500 Hz (2 msec period) timer and the MIDI files sound very good. So the ALSA user space router is absolutely not a problem.

> RT-Dave, the control engineer, would assume this *will* happen
> occasionally, effectively doubling the worst case latency, until
> proven wrong. ;-)

I'll do some latency graphs with jack + jack client when adding jack support to LS so we can measure direct ALSA output vs jack output. I think with the right low-latency kernel jack output at 3 msec can be done reliably, and that is what LS needs for tight note-on response.

> There most certainly will be an increase in minimum and average
> latency, but as long as event processing is done "instantly" (by
> blocking on input and sending the resulting events instantly; no
> timers and stuff in the MIDI processor), that effect should be
> insignificant. (Microseconds...)

Exactly. ALSA userspace MIDI routers are implemented using poll() so they block until a MIDI event arrives. This means they respond instantly (minus the context switch time).

> [...]
> > On the other hand, since almost all of my MIDI goes through the
> > ALSA stack somehow, and I view connections with kaconnect, could
> > that be a place to put these velocity modifiers?
>
> Well, that was my first thought when I started following this thread -
> but unfortunately, ALSA runs only on Linux. (And QNX, though I have
> no idea what state that stuff is in nowadays.)

Don't worry, ALSA's MIDI timing is excellent, no QNX needed.

> It would be nicer IMHO, if things like the "MIDI corrector" could use
> some portable plugin API - but OTOH, it can't be all that hard to
> port it to various APIs. No big deal. What's important is that it
> runs at the right place in the chain, and that it doesn't add
> significant latency.

Of course a builtin MIDI corrector is better (eg the table lookup) but the ALSA userspace router is ok too.

Anyway, it is just a waste of time discussing the MIDI corrector stuff; we have much bigger problems: getting looping, enveloping, LFOs and articulation working.

David: I told Christian we should implement a simple sample accurate event system in LS right from the start because it will save us many troubles later. For example, we can use the event system to do fast enveloping (lists of linear segments). This means sample accurate ramping, and it is very fast because you only need to increment the pitch value (pitch modulation) and/or volume value (volume modulation) by a delta amount. Exponential curves can be approximated by a succession of linear segments, and you could even modulate the pitch/volume in an arbitrary way by sending events with a frequency of eg. samplerate/4, which would still be very fast.

With the current ALSA MIDI IN we cannot yet exploit the sample accurate event system fully, but if something like VSTi for Linux comes out LS will be ready for sample accurate MIDI events.
Not to mention that we can lower the current realtime MIDI IN jitter when LS is played live by delaying the MIDI IN events based on the timestamp (we run the MIDI IN thread with higher priority than the audio thread, thus MIDI IN timing can have sub fragment-time precision).

When some event code is available in LS I'd like you, David, to review it a bit, since you are the timestamped-events master :-)

BTW: the switch() statement seems faster than function pointers since it constructs a jump table and does not need to save the return address on the stack. I think switch() will be ideal in the audio rendering routine to select various rendering functions, eg. sample playback with linear interpolation, cubic, with and without filter etc.

cheers,
Benno
http://www.linuxsampler.org
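[Editor's note] The fast-enveloping scheme Benno describes, an envelope stored as a list of linear segments and advanced by adding a per-sample delta, can be sketched roughly like this. The struct and function names are illustrative, not taken from the LinuxSampler source.

```c
#include <assert.h>
#include <stddef.h>

/* One linear envelope segment: ramp from the current level to `target`
 * over `length` samples. Illustrative sketch, not LinuxSampler code. */
struct env_segment {
    float target;   /* level at the end of the segment */
    size_t length;  /* duration in samples */
};

/* Render an envelope into `out` by walking the segment list. The
 * per-sample cost is one addition (level += delta), and breakpoints are
 * sample accurate because segment boundaries are counted in samples.
 * Exponential shapes are approximated by chaining short linear segments. */
static void env_render(float *out, size_t n_out, float level,
                       const struct env_segment *seg, size_t n_seg)
{
    size_t i = 0;
    for (size_t s = 0; s < n_seg && i < n_out; s++) {
        float delta = (seg[s].target - level) / (float)seg[s].length;
        for (size_t k = 0; k < seg[s].length && i < n_out; k++, i++) {
            level += delta;
            out[i] = level;
        }
    }
    while (i < n_out)       /* hold the final level */
        out[i++] = level;
}
```

The same increment-by-delta trick applies to pitch modulation: keep a per-voice delta and add it each sample, recomputing the delta only when a new (timestamped) event lands on its exact sample position.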
From: David O. <da...@ol...> - 2003-11-04 00:43:53
On Monday 03 November 2003 18.53, be...@ga... wrote:
> Scrive David Olofson <da...@ol...>:
> > On Monday 03 November 2003 16.37, Mark Knecht wrote:
> > [...]
> > > Should we have a Linux app, inside of LS or stand-alone, that
> > > allows you to get this data from your keyboard and 'tune' LS
> > > somehow to get the right dynamics? I think possibly yes....
> >
> > I think a tool like that sounds like a useful idea - but it's the
> > keyboard that should be "fixed", not the sampler. If the incoming
> > MIDI events need to be corrected, I'd strongly prefer it if the
> > weird versions never get into sequencers, MIDI processors and the
> > like.
>
> Hi David !
>
> Well I agree that broken velocity curves in MIDI keyboards are not
> the sampler's problem but giving the user the possibility to remap
> the velocity curves can be handy in some situations.
> After all it's just a simple table with 128 entries:
> velocity_out=velocity_remap[velocity_in] :-)

Sure, it's a nice feature, but fixing broken input might not be the best use for it, at least not when you have sequencers and other stuff in the setup as well.

That said, if we had a plugin API that can handle MIDI plugins (like XAP or GMPI), we could just have the sampler host the "MIDI corrector" plugin (and all sorts of other event processors and stuff for that matter), to avoid other hosts and potential latency issues. When using a sequencer, it would be natural to run the plugin on the sequencer's inputs instead, so you get "normal" events in the tracks.

[...]

> PS: This year I want LS with faithful .GIG playback under the
> christmas tree, I guess you guys too :-)

That wouldn't be too bad. :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
From: David O. <da...@ol...> - 2003-11-04 00:38:45
On Monday 03 November 2003 18.57, Mark Knecht wrote:
[...]
> If the conversion is not part of LS, then what's the additional
> latency incurred when playing a keyboard live? How long are MIDI
> events delayed going through a completely separate app? Is it
> deterministic, or will it vary from event to event?

If you have an additional process that must be scheduled to process every event, there is a significant risk of increased worst case latency. I'm not sure how likely it is that an event is hit by two "slow" reschedules (one in the MIDI processor and one in LS), but it's most probably possible.

RT-Dave, the control engineer, would assume this *will* happen occasionally, effectively doubling the worst case latency, until proven wrong. ;-)

There most certainly will be an increase in minimum and average latency, but as long as event processing is done "instantly" (by blocking on input and sending the resulting events instantly; no timers and stuff in the MIDI processor), that effect should be insignificant. (Microseconds...)

[...]
> On the other hand, since almost all of my MIDI goes through the
> Alsa stack somehow, and I view connections with kaconnect, could
> that be a place to put these velocity modifiers?

Well, that was my first thought when I started following this thread - but unfortunately, ALSA runs only on Linux. (And QNX, though I have no idea what state that stuff is in nowadays.)

It would be nicer IMHO, if things like the "MIDI corrector" could use some portable plugin API - but OTOH, it can't be all that hard to port it to various APIs. No big deal. What's important is that it runs at the right place in the chain, and that it doesn't add significant latency.

[...]
> Anyway, the best thought that I have right now is that these curves
> Cristian asked me to look at are, at least partially, in existence
> to handle stuff like this. I might be wrong, but that's all I've
> got right now.

Yeah, it sure looks like something like that...

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
From: Mark K. <mar...@co...> - 2003-11-03 17:58:07
On Mon, 2003-11-03 at 09:10, Robert Jonsson wrote:
> Monday 03 November 2003 17.44 skrev David Olofson:
> > On Monday 03 November 2003 16.37, Mark Knecht wrote:
> > [...]
> > > Should we have a Linux app, inside of LS or stand-alone, that
> > > allows you to get this data from your keyboard and 'tune' LS
> > > somehow to get the right dynamics? I think possibly yes....
> >
> > I think a tool like that sounds like a useful idea - but it's the
> > keyboard that should be "fixed", not the sampler.
>
> Definitely NOT part of a sampler, maybe not part of the keyboard either.
> It would make a quite interesting project on its own. Some kind of
> midi-filter with which you could calibrate and effectuate the
> midi-traffic. Maybe it already exists?
>
> /Robert

OK, it could go either way. I know the prototypical Linux answer is a bunch of small apps working together. However, that might work better for text processing than real-time audio.

If the conversion is not part of LS, then what's the additional latency incurred when playing a keyboard live? How long are MIDI events delayed going through a completely separate app? Is it deterministic, or will it vary from event to event?

I think if it's built in, or maybe called in the form of some LADSPA plugin that could do this sort of stuff, then I suspect the results are likely to be more consistent, but that's just my thought and not based on real experience.

On the other hand, since almost all of my MIDI goes through the ALSA stack somehow, and I view connections with kaconnect, could that be a place to put these velocity modifiers? It would be nice if a single key hit could be reflected out to two different synths with different velocity values. (Think about one curve being 127-MV: now I mix two synths based on how hard I play. Could be fun.) I agree the problem is in the keyboard, but there's no way to get those sorts of problems fixed, as far as I know.

I'm not arguing one way or the other on this, I just think there could be some issue like this that Nemesis thought of when they wrote GSt. I prefer to start from a POV that says people are smart and do *most* of what they do because they had good reasons. (And no matter what my emails sound like sometimes, I really do think this way. I ain't half as smart as most people on these lists and don't take myself that seriously.) There will be, of course, bugs that creep into any program, but on the whole GSt is a pretty successful program. We could do a lot worse with LS, even though I hope to do much better!

Anyway, the best thought that I have right now is that these curves Cristian asked me to look at are, at least partially, in existence to handle stuff like this. I might be wrong, but that's all I've got right now.

- Mark
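[Editor's note] Mark's "one curve being 127-MV" crossfade idea, one key hit fanned out to two synths where the second gets the complementary velocity, takes only a few lines. This is a hypothetical illustration, not an existing router feature.

```c
#include <assert.h>

/* Send one key hit to two synths: synth A gets the played velocity,
 * synth B gets the complement (127 - v), so playing harder fades from
 * B toward A. Illustrative sketch only. */
struct velocity_pair { unsigned char a, b; };

static struct velocity_pair split_velocity(unsigned char v)
{
    struct velocity_pair p = { v, (unsigned char)(127 - v) };
    return p;
}
```

In an ALSA-sequencer router this would mean duplicating each note event to a second output port with the remapped velocity, which is exactly the kind of "stupid stuff" Robert's proposed MIDI filter lists (split, velocity compand).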
From: <be...@ga...> - 2003-11-03 17:53:13
Scrive David Olofson <da...@ol...>:
> On Monday 03 November 2003 16.37, Mark Knecht wrote:
> [...]
> > Should we have a Linux app, inside of LS or stand-alone, that
> > allows you to get this data from your keyboard and 'tune' LS
> > somehow to get the right dynamics? I think possibly yes....
>
> I think a tool like that sounds like a useful idea - but it's the
> keyboard that should be "fixed", not the sampler. If the incoming
> MIDI events need to be corrected, I'd strongly prefer it if the weird
> versions never get into sequencers, MIDI processors and the like.

Hi David !

Well I agree that broken velocity curves in MIDI keyboards are not the sampler's problem but giving the user the possibility to remap the velocity curves can be handy in some situations. After all it's just a simple table with 128 entries:
velocity_out=velocity_remap[velocity_in] :-)

As said, Mark should simply figure out the most common velocity curves in GSt so that we can approximate the curves and build our own velocity-to-linear-volume table. So switching velocity curve is just a matter of switching the table.

PS: This year I want LS with faithful .GIG playback under the christmas tree, I guess you guys too :-)

cheers,
Benno
http://www.linuxsampler.org
From: Robert J. <rob...@da...> - 2003-11-03 17:18:01
> > > I think a tool like that sounds like a useful idea - but it's the
> > > keyboard that should be "fixed", not the sampler.
> >
> > Definitely part of a sampler, maybe not part of the keyboard either.

Definitely NOT part of a sampler.
From: Robert J. <rob...@da...> - 2003-11-03 17:11:48
Monday 03 November 2003 17.44 skrev David Olofson:
> On Monday 03 November 2003 16.37, Mark Knecht wrote:
> [...]
> > Should we have a Linux app, inside of LS or stand-alone, that
> > allows you to get this data from your keyboard and 'tune' LS
> > somehow to get the right dynamics? I think possibly yes....
>
> I think a tool like that sounds like a useful idea - but it's the
> keyboard that should be "fixed", not the sampler.

Definitely part of a sampler, maybe not part of the keyboard either. It would make a quite interesting project on its own. Some kind of midi-filter with which you could calibrate and effectuate the midi-traffic. Maybe it already exists?

/Robert

> If the incoming
> MIDI events need to be corrected, I'd strongly prefer it if the weird
> versions never get into sequencers, MIDI processors and the like.
From: David O. <da...@ol...> - 2003-11-03 16:44:28
On Monday 03 November 2003 16.37, Mark Knecht wrote:
[...]
> Should we have a Linux app, inside of LS or stand-alone, that
> allows you to get this data from your keyboard and 'tune' LS
> somehow to get the right dynamics? I think possibly yes....

I think a tool like that sounds like a useful idea - but it's the keyboard that should be "fixed", not the sampler. If the incoming MIDI events need to be corrected, I'd strongly prefer it if the weird versions never get into sequencers, MIDI processors and the like.

//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia.   |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
From: Steve H. <S.W...@ec...> - 2003-11-03 15:53:13
On Mon, Nov 03, 2003 at 07:37:06 -0800, Mark Knecht wrote:
> The curves themselves don't look that bad to me, albeit a bit strange.
> What reason would someone have for deciding that some arbitrary number
> of high MIDI velocity events should all produce the same volume? The
> only reason I could think of was that some MIDI keyboards, no matter
> how hard you hit the keyboard, do not produce a full MIDI Velocity
> range.

There are certainly some where, in order to produce the full range, you have to play uncomfortably hard (modulo the fact that I'm a lousy keyboard player).

Velocity curves are a feature on some controller keyboards, but I guess not all.

- Steve
|
From: Mark K. <mar...@co...> - 2003-11-03 15:37:12
|
On Mon, 2003-11-03 at 07:01, be...@ga... wrote: > Simon Jenkins <sje...@bl...> writes: > > > Mark Knecht wrote: > > > > > Another 'bug' I notice, or maybe a limitation of the GSt coding, is > > >that the first 2-4 ms of a new pulse has the volume of the pulse > > >preceding it, and then the right volume. I want to look at this more > > >later. I'm wondering if this could be some psycho-acoustic thing they're > > >doing? (More likely just a mistake...) > > > > > Maybe they're lowering latency by starting to play a note before they've > > seen the velocity byte? Even after they've seen it - depending on how the > > engine works - they might have some just-in-time fiddling around to do > > before they're ready to make use of it. > > that would IMHO be totally silly. > You would save about 300usec (transfer time of 1 MIDI byte), plus > what if you play a loud C1 and then a very soft C4? I guess this 5msec > part of the C4 at the loud volume would be very disturbing (if really present). I agree. It doesn't make sense, and it doesn't happen on every note. It only seemed to happen on notes where GSt was making a larger quantization change. However, please remember this is a data point of 1. I had my Hammerfall Light set at the smallest buffer size, which could affect it. I may not have the latest update of GSt (although I think I do), which could affect it. It could be based (somehow) on the driver for my sound card. Anyway, I just reported it since I think it's better to have the data out there. It might make sense to one of you, or to someone who joins in later. It makes no real sense to me, and I don't think that we should try to do anything like this until we have a good reason to do it. > > Even this velocity curve quantization looks silly to me; it will make > note volumes more static, so I think it will hurt note dynamics during > playing. Let's do it right (mapping the approximate curve but without > quantization). > Yes.
Having thought about this overnight, I'm pretty sure there is some major bug in the way GSt does its internal mix buses. The MIDI Velocity curves and what they are doing to output volume just look really ugly when channel volume is not at its maximum. I can tell you that I will never again use GSt with the volume set to less than maximum. The curves themselves don't look that bad to me, albeit a bit strange. What reason would someone have for deciding that some arbitrary number of high MIDI velocity events should all produce the same volume? The only reason I could think of was that some MIDI keyboards, no matter how hard you hit the keyboard, do not produce a full MIDI Velocity range. Maybe the idea was to give people a way to get more dynamic range out of GSt when used with that sort of keyboard? Someone on Linux-Audio-User or PlanetCCRMA was asking for exactly this sort of support in the last couple of weeks. Possibly that was part of the reason? Nonetheless, GSt is so badly documented that it took me 3 hours with Pro Tools to start discovering this stuff. Normal folks aren't going to do this, and they're also not going to know what range of velocities their keyboards are producing. Should we have a Linux app, inside of LS or stand-alone, that allows you to get this data from your keyboard and 'tune' LS somehow to get the right dynamics? I think possibly yes.... Cheers, Mark |
|
From: <be...@ga...> - 2003-11-03 15:01:24
|
Simon Jenkins <sje...@bl...> writes: > Mark Knecht wrote: > > > Another 'bug' I notice, or maybe a limitation of the GSt coding, is > >that the first 2-4 ms of a new pulse has the volume of the pulse > >preceding it, and then the right volume. I want to look at this more > >later. I'm wondering if this could be some psycho-acoustic thing they're > >doing? (More likely just a mistake...) > > > Maybe they're lowering latency by starting to play a note before they've > seen the velocity byte? Even after they've seen it - depending on how the > engine works - they might have some just-in-time fiddling around to do > before they're ready to make use of it. that would IMHO be totally silly. You would save about 300usec (transfer time of 1 MIDI byte), plus what if you play a loud C1 and then a very soft C4? I guess this 5msec part of the C4 at the loud volume would be very disturbing (if really present). Even this velocity curve quantization looks silly to me; it will make note volumes more static, so I think it will hurt note dynamics during playing. Let's do it right (mapping the approximate curve but without quantization). cheers, Benno ------------------------------------------------- This mail sent through http://www.gardena.net |
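Benno's "map the approximate curve without quantization" could be sketched as a continuous velocity-to-gain function like the one below. This is a hypothetical illustration in Python: the `shape` exponent is a made-up tuning parameter, not a value taken from GSt, and the function name is invented.

```python
# Hypothetical sketch: a continuous velocity-to-gain mapping with no
# quantization steps. The exponent is an illustrative tuning knob.

def velocity_to_gain(velocity: int, shape: float = 2.0) -> float:
    """Map MIDI velocity 0..127 to a linear gain 0.0..1.0.

    shape > 1 gives a concave 'non-linear' response; shape == 1 is
    linear. Every distinct velocity yields a distinct gain, so note
    dynamics are never collapsed into plateaus.
    """
    v = max(0, min(velocity, 127)) / 127.0
    return v ** shape
```

Because the mapping is strictly monotonic, all 128 input velocities produce 128 distinct gains, which is exactly the property the GSt plateaus in Mark's measurements violate.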
|
From: Simon J. <sje...@bl...> - 2003-11-03 14:49:38
|
Mark Knecht wrote: > Another 'bug' I notice, or maybe a limitation of the GSt coding, is >that the first 2-4mS of a new pulse has the volume of the pulse >preceding it, and then the right volume. I want to look at this more >later. I'm wondering if this could be some psycho-acoustic thing they're >doing? (More likely just a mistake...) > Maybe they're lowering latency by starting to play a note before they've seen the velocity byte? Even after they've seen it - depending on how the engine works - they might have some just-in-time fiddling around to do before they're ready to make use of it. Simon Jenkins (Bristol, UK) |
|
From: Mark K. <mar...@co...> - 2003-11-03 13:21:40
|
On Mon, 2003-11-03 at 04:14, Christian Schoenebeck wrote: > On Monday, 3 November 2003 00:24, Mark Knecht wrote: > > Also there are a large number of possibilities just in this small > > part of GSt's editor. We have the Velocity Response, 3 values, Dynamic > > Range, 5 values, and Curve scaling, 128 values, giving us 1920 > > possibilities. Not fun. We need to constrain this to a reasonable amount > > of work. > > Agree. I didn't take into account that those other parameters might also have > influence on the curve shape, so I thought it would be just 128 > possibilities. So you're right, that's a senseless amount of work. :) I'm glad we all agree. > > > So, I hope from all of this data you should have enough to get > > started. I don't have the patience (really, I don't!) to try and sit in > > Of course. For the start we'll just implement curves that come close to what > you described, and when we've finished all the basic stuff maybe we'll do some fine > tuning if it's needed. Great. I think we probably need 4-5 more curves, which I can check out later this week as I get some time. I am guessing right now that the release sample curves just duplicate the attack curves, but possibly not. To look at that I'd have to make a long sine wave - 30 seconds or so - and then learn how to add a release sample to my gig file. I will likely not have too much time for that until mid-week. I'll let you know. Cheers, Mark |
|
From: Christian S. <chr...@ep...> - 2003-11-03 12:20:01
|
On Monday, 3 November 2003 00:24, Mark Knecht wrote: > Also there are a large number of possibilities just in this small > part of GSt's editor. We have the Velocity Response, 3 values, Dynamic > Range, 5 values, and Curve scaling, 128 values, giving us 1920 > possibilities. Not fun. We need to constrain this to a reasonable amount > of work. Agree. I didn't take into account that those other parameters might also have influence on the curve shape, so I thought it would be just 128 possibilities. So you're right, that's a senseless amount of work. :) > So, I hope from all of this data you should have enough to get > started. I don't have the patience (really, I don't!) to try and sit in Of course. For the start we'll just implement curves that come close to what you described, and when we've finished all the basic stuff maybe we'll do some fine tuning if it's needed. Thanks again Mark! CU Christian |
|
From: Steve H. <S.W...@ec...> - 2003-11-03 10:10:20
|
On Sun, Nov 02, 2003 at 03:24:26PM -0800, Mark Knecht wrote: > Also there are a large number of possibilities just in this small > part of GSt's editor. We have the Velocity Response, 3 values, Dynamic > Range, 5 values, and Curve scaling, 128 values, giving us 1920 > possibilities. Not fun. We need to constrain this to a reasonable amount > of work. Those values should all be discoverable in isolation; once we know how they work, the interactions should be clear. - Steve, crossing fingers |
|
From: Steve H. <S.W...@ec...> - 2003-11-03 09:43:37
|
On Sun, Nov 02, 2003 at 11:29:33 -0800, Josh Green wrote: > > Pfft! GObject is a little messy around the edges, but /nothing/ could > > compare to the nightmare mess that is C++. > > > > <dons flame proof suit> > > > > - Steve > > He he thats interesting to hear actually. I haven't used C++ enough to > know its pitfalls, but I have used GObject enough to know some of the > rough edges :) Maybe one of these days I'll actually look into learning > some of the ++ stuff, or perhaps I'll just use Python instead. ObjectiveC is a fine language, and well supported by gcc now (OSX uses it extensively), but less efficient than GObject or C++. ObjC has OO messages à la Smalltalk, rather than just 'magic function' methods. - Steve |
|
From: <be...@ga...> - 2003-11-03 09:34:44
|
Mark Knecht <mar...@co...> writes: > > Hi, > First off, while we want to operate like GSt, in my mind at least we > don't need to be exactly identical. We should do what's right, I think, > and then wait for some feedback from early users as to how it sounds. I > think you'll find from my data below that GSt, while a good program, has > some really strange operation when you dig in as deeply as I did today. I agree Mark, the important thing is that the velocity curves are reasonably similar to those of GSt. The quantization behavior is completely silly IMHO. I think you should go a different route: you should try to load your GIG libs and look at what kind of curves they use, then take, let's say, the 5 most frequent curves and map them approximately to values using the sinewave samples. > Another 'bug' I notice, or maybe a limitation of the GSt coding, is > that the first 2-4 ms of a new pulse has the volume of the pulse > preceding it, and then the right volume. I want to look at this more > later. I'm wondering if this could be some psycho-acoustic thing they're > doing? (More likely just a mistake...) I assume it is a silly bug. :-) cheers, Benno http://www.linuxsampler.org ------------------------------------------------- This mail sent through http://www.gardena.net |
|
From: Mark K. <mar...@co...> - 2003-11-02 23:24:34
|
On Sun, 2003-11-02 at 08:41, Christian Schoenebeck wrote: > On Sunday, 2 November 2003 15:20, Mark Knecht wrote: > > On Sun, 2003-11-02 at 02:15, Christian Schoenebeck wrote: > > > Perfect would be a simple list with > > > > > > velocity value - output level (dB) > > > ... > > > > How can I get you this list and make life perfect? I'd like to learn to > > Unfortunately I don't know of an app that does that automatically, so you > should use your Pro Tools sample editor and read the peak value of each > triggered sample and write that by hand simply into a text file (sorry, > boring work - I know). If your editor allows multiple scales for displaying > amplitude, then better not to choose dB. Else we would have to recalculate > that to a non-exponential value anyway, so best would be the direct "raw" value of > the sample point (e.g. 39458). Hi, First off, while we want to operate like GSt, in my mind at least we don't need to be exactly identical. We should do what's right, I think, and then wait for some feedback from early users as to how it sounds. I think you'll find from my data below that GSt, while a good program, has some really strange operation when you dig in as deeply as I did today. My first results are sort of strange, and I'm learning a lot as I go along. I hope this ends up making sense. I've downloaded and run Trachtman's MIDI file against his special gig file. Some of those results look pretty good, but none of them look identical to what's on his web site. The MIDI file is good, so I'm using it with a new gig file I built using a 5KHz sine wave on one note. This seems to be working, but it has produced some results I'm not totally comfortable with. Also there are a large number of possibilities just in this small part of GSt's editor. We have the Velocity Response, 3 values, Dynamic Range, 5 values, and Curve scaling, 128 values, giving us 1920 possibilities. Not fun. We need to constrain this to a reasonable amount of work. 
OK, so using the default settings (non-linear, high & 20) AND the default GSt mixer settings (-6 dB on the channel volume, and -6 dB on the mixer volume) I actually get VERY strange audio into Pro Tools. Of the 128 pulses I record, there appear to be only 13 distinct volumes. They go in groups that are sort of evenly spaced, so you get sometimes 7-10 pulses with the same volume, and sometimes only about 4 with the same volume. Also, with these mixer settings, the linear and non-linear are not that different. Weird, eh? I think this may be a bug. Another 'bug' I notice, or maybe a limitation of the GSt coding, is that the first 2-4 ms of a new pulse has the volume of the pulse preceding it, and then the right volume. I want to look at this more later. I'm wondering if this could be some psycho-acoustic thing they're doing? (More likely just a mistake...) OK, so not liking ANY of that data, I then changed the default volume settings and pushed them both up to maximum volume. This gives results that are much more reasonable. Here's what I see: NL-high-20 - The first 15 pulses (MIDI velocity 127-113) are at maximum volume. After that there are sort of 2 long straight lines for the rate that audio decreases at, so that it sort of looks like a slow decay. The first group is 21 pulses long (MV 112-92) and gets down to about 45%, then a group of about 28 pulses (MV 91-64) where it goes down to about 15%. After that there is a group that goes out to MV 24 and gets down to about 5%. After that it just stays at 5% from MV 24 to MV 0. L-high-20 - The first 21 pulses (MV 127-107) are at maximum volume. After that it's a linear drop to 2% volume at MV 0. S-high-20 - This is an accelerated exponential drop-off. The first 17 pulses are at maximum volume. After that it falls off pretty quickly (very exponential in looks) so that 12 pulses later you're down to around 20%, and then there are sort of 4 equally sized groups of pulses down to about 2% volume at MV 0. Strange results, yes? 
OK, so more work to do. I then looked at Curve Scaling == 0. This doesn't have much effect on anything when Dynamic Range is set to high. I get curves that are pretty identical to the curves described above. So next I pushed Dynamic Range to low and recorded more data. This was very interesting. NL-low-0 - 40 pulses (MV 127-88) at maximum volume, then a straight line of 10 samples (MV 87-78) down to about 80% volume, then a straight line all the rest of the way to MV = 0 where the volume is about 25%. L-low-0 - Very interesting results! 128 pulses at maximum volume. S-low-0 - Almost a copy of NL-high-20 up above. So, I hope from all of this data you should have enough to get started. I don't have the patience (really, I don't!) to try and sit in Pro Tools and make tables of exact values, but I'm not sure it's that necessary. As soon as you get something in LS, I'll build it, record audio using it, and do a visual comparison of these settings and more. Maybe if someone on the list knows how I could take these wave files and extract the exact values automatically, using sox or some other tools, then I'd be happy to try that if it's not going to consume huge amounts of time. Please remember that each wave file is stereo and contains 128 audio pulses. I hope you think this is reasonable. Just getting this data took about 3 hours. 3:15 in the afternoon. No music done yet. I'm gonna write a song today if it kills me! ;-) Let me know if you or anyone else needs additional clarification. Cheers, Mark |
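The automatic extraction Mark asks about doesn't need sox; a short script can do it. The sketch below is a hypothetical illustration (the `pulse_peaks` helper is invented): it assumes 16-bit PCM and that the recording divides evenly into one segment per pulse, then reports each segment's peak sample value.

```python
# Hypothetical sketch: split a recorded WAV of evenly spaced test
# pulses into segments and report the peak absolute sample value of
# each segment. Assumes 16-bit PCM; channels are simply interleaved
# into the scan, which is fine for finding per-segment peaks.

import struct
import wave

def pulse_peaks(path: str, n_pulses: int = 128) -> list[int]:
    """Return the peak absolute sample value of each of n_pulses segments."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2, "sketch expects 16-bit PCM"
        raw = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    seg = len(samples) // n_pulses
    return [max(abs(s) for s in samples[i * seg:(i + 1) * seg])
            for i in range(n_pulses)]
```

Feeding the 128-pulse recordings through this would give exactly the velocity-to-peak table Christian asked for, without hand-reading values in Pro Tools.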
|
From: Sebastien M. <me...@me...> - 2003-11-02 19:44:51
|
As one of the developers of the UVI Sampling Engine (which is behind the plugsounds/spectrasonics/MachFive plugins) I'd be very happy about a "sampler performance test suite" that could be used to test objective and subjective performance of all the sampling engines on the market. The suite should include (at least): streaming and non-streaming polyphony perfs, effects perfs, routing perfs, memory footprint tests, and signal handling tests and benchmarks (envelopes, pitch correctness, LFOs, etc...). It could be a good idea to develop a set of typical samples and a methodology to test commercial and non-commercial samplers on the market. I'd be quite happy to contribute to such a project, and give the input of my fellow sound designers at USB Sounds. Anyone care to comment on this idea? Sebastien be...@ga... wrote: >Some interesting performance comparisons the PMI (they produce piano sample >libraries) guys have done: > >http://forum.cubase.net/forum/Forum24/HTML/000344.html >http://www.northernsounds.com/ubb/NonCGI/ultimatebb.php?ubb=get_topic;f=3;t=005941;p=2 > > >I have to admit NI Kontakt is damn efficient but I think we >can achieve a similar number of voices. >(let's wait for the envelopes, sustain pedal etc. working, then >we will be able to produce useful numbers) > >Benno > > |
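One item from Sebastien's list, a non-streaming polyphony benchmark, could be sketched crudely as below. This is a hypothetical illustration in Python (a real suite would drive the actual sampler engines); the function names and the 4096-frame buffer size are invented, and the numbers it produces are only indicative of relative cost.

```python
# Hypothetical sketch of a crude polyphony benchmark: time how long
# mixing `voices` sine voices into one buffer takes, and compare that
# to the buffer's real-time duration.

import math
import time

def mix_voices(voices: int, frames: int = 4096, rate: int = 44100) -> float:
    """Mix `voices` sine voices into one buffer; return elapsed seconds."""
    buf = [0.0] * frames
    start = time.perf_counter()
    for v in range(voices):
        step = 2.0 * math.pi * (220.0 + v) / rate  # one frequency per voice
        phase = 0.0
        for i in range(frames):
            buf[i] += math.sin(phase)
            phase += step
    return time.perf_counter() - start

def realtime_ratio(voices: int, frames: int = 4096, rate: int = 44100) -> float:
    """Ratio of render time to buffer duration; > 1.0 means not real-time."""
    return mix_voices(voices, frames, rate) / (frames / rate)
```

Repeating this while increasing `voices` until the ratio crosses 1.0 gives a comparable "max voices" figure, which is the kind of objective number the proposed suite would standardize.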
|
From: Josh G. <jg...@us...> - 2003-11-02 19:32:18
|
On Fri, 2003-10-31 at 02:44, Steve Harris wrote: > On Fri, Oct 31, 2003 at 02:21:08 -0800, Josh Green wrote: > > On Wed, 2003-10-29 at 03:19, be...@ga... wrote: > > Right, OO programming :) C does that as well (with the help of GObject), > > but its not nearly as clean as C++. Someday I may regret not just using > ... > > Pfft! GObject is a little messy around the edges, but /nothing/ could > compare to the nightmare mess that is C++. > > <dons flame proof suit> > > - Steve Heh, that's interesting to hear actually. I haven't used C++ enough to know its pitfalls, but I have used GObject enough to know some of the rough edges :) Maybe one of these days I'll actually look into learning some of the ++ stuff, or perhaps I'll just use Python instead. Josh |
|
From: Mark K. <mar...@co...> - 2003-10-31 21:43:25
|
On Fri, 2003-10-31 at 13:05, Christian Schoenebeck wrote: > > I'm waiting for Christian's opinion (here on the list) too whether we > > should do this quick'n dirty hack otherwise I risk getting beaten by him > > :-) > > I will definitely hurt you Benno if you do that! No Mark, we will add the > remaining basic stuff very soon (envelopes, sustain, etc.). Fortunately I > have time for that now, so maybe we can finish the envelopes this weekend. > > CU > Christian Just what I like...CLEAR communication! ;-) I'll probably have filter data for Steve by Saturday around noon my time. I'm going a bit slowly to learn about Octave, check the data a bit myself, and learn how to use those tools. I hope this saves Steve from any stupid mistakes on my part. Maybe in another week or two we'll have the next big step in functionality. Sounds like fun! Cheers, Mark |
|
From: Christian S. <chr...@ep...> - 2003-10-31 21:10:41
|
On Thursday, 30 October 2003 20:46, be...@ga... wrote: > > Don't go at this idea too fast, or look for too many people too > > early, but just keep the possibility in mind. Instead of implementing > > full blown ADSR's, just build in a fail safe click eliminator and then > > turn us loose! ;-) > > While your thoughts make sense I don't think that this hack will be > that beneficial since it still takes away some time. > [snip] > > I'm waiting for Christian's opinion (here on the list) too whether we > should do this quick'n dirty hack otherwise I risk getting beaten by him > :-) I will definitely hurt you Benno if you do that! No Mark, we will add the remaining basic stuff very soon (envelopes, sustain, etc.). Fortunately I have time for that now, so maybe we can finish the envelopes this weekend. CU Christian |
|
From: <be...@ga...> - 2003-10-31 13:22:20
|
Mark Knecht <mar...@co...> writes: > > If everything looks good, then we see about getting some real GSt > library developers involved. I'll drive down to L.A. in December > sometime and set it up for one or two of the guys down there if > required. (Heck, I'll even set it up for Hans Zimmer, user of 30 copies > of GSt, if he asks nicely!) ;-) (OK, even if he doesn't as I'd like to > meet him.) > > With everything above working, you release publicly and get your 'shock > and awe' strategy. > > Anyway, that's my thoughts right now. Sort of bold for me to put all > that down. I'm not tied to any of it, but hopefully the list would be > something others would add to or rearrange as the group sees fit. Mark, your strategy makes sense to me. For now, don't announce LS to the world: it is currently a developer version, and as long as it does not provide the basic features to be usable by non-technical users, stability, and correct audio playback of at least 95% of the available sample libraries, we should not announce LS anywhere except in developer circles (this list and possibly LAD). The fact that the GUI I'm writing runs on multiple platforms (written in Qt, so it runs on Windows and Mac too) will probably be very beneficial for those that want to dedicate entire PCs to LS. The cheap price of hardware is a big plus for us. For the price of Windows XP + a good softsampler you can buy a relatively powerful PC with a soundcard (perhaps even a 24-bit card like the M-Audio Delta cards which start from $200-$250). Powerful hardware cheaper than commercial software .... silly :-) LS .... the poor man's "Hans Zimmer Studio" :-) cheers, Benno ------------------------------------------------- This mail sent through http://www.gardena.net |