From: <be...@ga...> - 2003-11-04 11:32:43
David Olofson <da...@ol...> wrote:

> > Well I agree that broken velocity curves in MIDI keyboards are not
> > the sampler's problem but giving the user the possibility to remap
> > the velocity curves can be handy in some situations.
> > After all it's just a simple table with 128 entries.
> > velocity_out = velocity_remap[velocity_in] :-)
>
> Sure, it's a nice feature, but fixing broken input might not be the
> best use for it, at least not when you have sequencers and other stuff
> in the setup as well.
>
> That said, if we had a plugin API that can handle MIDI plugins (like
> XAP or GMPI), we could just have the sampler host the "MIDI
> corrector" plugin (and all sorts of other event processors and stuff
> for that matter), to avoid other hosts and potential latency issues.
> When using a sequencer, it would be natural to run the plugin on the
> sequencer's inputs instead, so you get "normal" events in the tracks.

Yes, GMPI will be very cool - finally a free VSTi? :-)
I heard Steinberg is not participating (boycotting it?), I guess because
VST is the de-facto standard and controlling the format is advantageous
for them, possibly giving them an edge over the competition.
Sad ... I always heard "open source leads to fragmentation", but to me
the Windows audio world seems much more fragmented: VST, DirectX, RTAS,
EASI, GSIF, ReWire ... on Linux things do look much better: JACK and
LADSPA. I think an important API is still missing: a sort of VSTi.
Would it be better to extend LADSPA to support MIDI input too, or to
wait for GMPI? I think if we had a VSTi-like API it would lead to a big
proliferation of soft-synth and sampler plugins. But for now we have to
write standalone apps that output the audio via JACK and perform MIDI
input via the ALSA interface (which is nice, but does not offer
synchronous transfer, sample accuracy etc.).

> On Monday 03 November 2003 18.57, Mark Knecht wrote:
> [...]
> > If the conversion is not part of LS, then what's the additional
> > latency incurred when playing a keyboard live? How long are MIDI
> > events delayed going through a completely separate app? Is it
> > deterministic, or will it vary from event to event?
>
> If you have an additional process that must be scheduled to process
> every event, there is a significant risk of increased worst case
> latency. I'm not sure how likely it is that an event is hit by two
> "slow" reschedules (one in the MIDI processor and one in LS), but
> it's most probably possible.

The same applies to JACK, but I think with a good lowlatency kernel the
additional latency is 50-100 usec max. This means a user space ALSA
MIDI router practically does not degrade the MIDI timing. I've seen
keyboards whose builtin sequencers run on a 500 Hz timer (2 msec
period) and the MIDI files sound very good, so an ALSA user space
router is absolutely not a problem.

> RT-Dave, the control engineer, would assume this *will* happen
> occasionally, effectively doubling the worst case latency, until
> proven wrong. ;-)

I'll make some latency graphs with JACK + a JACK client when I add JACK
support to LS, so we can compare direct ALSA output against JACK
output. I think with the right lowlatency kernel, JACK output at 3 msec
can be done reliably, and that is what LS needs for tight note-on
response.
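The JACK output side should not be much more than this minimal client
skeleton (just a sketch to illustrate, the client/port names are made
up, not actual LS code):

    // Minimal JACK output client sketch. process() runs in JACK's
    // realtime thread; here it just writes silence into one port.
    #include <jack/jack.h>
    #include <string.h>
    #include <unistd.h>

    jack_port_t *out_port;

    int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *buf = (jack_default_audio_sample_t *)
            jack_port_get_buffer(out_port, nframes);
        memset(buf, 0, sizeof(jack_default_audio_sample_t) * nframes);
        // the sampler would render its voices into buf here
        return 0;
    }

    int main()
    {
        jack_client_t *client = jack_client_new("ls_test");
        if (client == 0) return 1;
        jack_set_process_callback(client, process, 0);
        out_port = jack_port_register(client, "out_1",
                                      JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        if (jack_activate(client) != 0) return 1;
        for (;;) sleep(1);  // all the work happens in the JACK callback
        return 0;
    }

At 44.1 kHz a period of 128 frames is about 2.9 msec, so the 3 msec
figure corresponds to running jackd with 128-frame periods.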
> There most certainly will be an increase in minimum and average
> latency, but as long as event processing is done "instantly" (by
> blocking on input and sending the resulting events instantly; no
> timers and stuff in the MIDI processor), that effect should be
> insignificant. (Microseconds...)

Exactly. ALSA userspace MIDI routers are implemented using poll(), so
they block until a MIDI event arrives. This means they respond
instantly (minus the context switch time).

> [...]
> > On the other hand, since almost all of my MIDI goes through the
> > Alsa stack somehow, and I view connections with kaconnect, could
> > that be a place to put these velocity modifiers?
>
> Well, that was my first thought when I started following this thread -
> but unfortunately, ALSA runs only on Linux. (And QNX, though I have
> no idea what state that stuff is in nowadays.)

Don't worry, ALSA's MIDI timing is excellent - no QNX needed.

> It would be nicer IMHO, if things like the "MIDI corrector" could use
> some portable plugin API - but OTOH, it can't be all that hard to
> port it to various APIs. No big deal. What's important is that it
> runs at the right place in the chain, and that it doesn't add
> significant latency.

Of course a builtin MIDI corrector is better (e.g. the table lookup),
but the ALSA userspace router is OK too. Anyway, it is just a waste of
time to keep discussing the MIDI corrector stuff; we have much bigger
problems: getting looping, enveloping, LFOs and articulation working.

David: I told Christian we should implement a simple sample accurate
event system in LS right from the start, because it will save us a lot
of trouble later. For example, we can use the event system to do fast
enveloping (lists of linear segments). This gives sample accurate
ramping and is very fast, because you only need to increment the pitch
value (pitch modulation) and/or volume value (volume modulation) by a
delta amount per sample. Exponential curves can be approximated by a
succession of linear segments, and you could even modulate the
pitch/volume in an arbitrary way by sending events at a rate of e.g.
samplerate/4, which would still be very fast.
With the current ALSA MIDI IN we cannot yet exploit the sample accurate
event system fully, but if something like VSTi for Linux comes out, LS
will be ready for sample accurate MIDI events. Not to mention that we
can lower the current realtime MIDI IN jitter when LS is played live by
delaying the MIDI IN events based on their timestamps (we run the MIDI
IN thread with higher priority than the audio thread, thus MIDI IN
timing can have sub fragment-time precision).
When some event code is available in LS, I'd like you, David, to review
it a bit, since you are the timestamped-events master :-)

BTW: a switch() statement seems faster than function pointers, since
the compiler constructs a jump table and there is no need to save a
return address on the stack. I think switch() will be ideal in the
audio rendering routine to select between the various rendering
variants, e.g. sample playback with linear interpolation, cubic, with
and without filter etc.

cheers,
Benno

http://www.linuxsampler.org
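PS: to make the switch() idea concrete, here is a rough sketch (made-up
names, not actual LS code) of a render routine that selects the
interpolation variant via a jump table and does the per-sample delta
ramping for volume/pitch that the event system would drive:

    // Rough sketch: switch() dispatch in the render routine. Each case
    // is a tight loop with no function calls, and the compiler can
    // turn the switch into a jump table.

    enum RenderMode { LINEAR, CUBIC };  // more cases: with filter, etc.

    struct Voice {
        RenderMode   mode;
        const float *sample;     // sample data, with guard samples at the edges
        double       pos;        // playback position (fractional)
        double       pitch;      // position increment per output sample
        float        vol;        // current volume
        float        vol_delta;  // per-sample increment for the current
                                 // linear envelope segment
    };

    void RenderVoice(Voice &v, float *out, int n)
    {
        switch (v.mode) {
        case LINEAR:
            for (int i = 0; i < n; i++) {
                int   j = (int)v.pos;
                float f = (float)(v.pos - j);
                out[i] += v.vol *
                    (v.sample[j] + f * (v.sample[j + 1] - v.sample[j]));
                v.pos += v.pitch;      // pitch modulation = ramping this delta
                v.vol += v.vol_delta;  // sample accurate volume ramp
            }
            break;
        case CUBIC:
            for (int i = 0; i < n; i++) {
                int   j = (int)v.pos;
                float f = (float)(v.pos - j);
                float xm1 = v.sample[j - 1], x0 = v.sample[j];
                float x1  = v.sample[j + 1], x2 = v.sample[j + 2];
                // 4-point cubic (Catmull-Rom) interpolation
                float c1 = 0.5f * (x1 - xm1);
                float c2 = xm1 - 2.5f * x0 + 2.0f * x1 - 0.5f * x2;
                float c3 = 0.5f * (x2 - xm1) + 1.5f * (x0 - x1);
                out[i] += v.vol * (((c3 * f + c2) * f + c1) * f + x0);
                v.pos += v.pitch;
                v.vol += v.vol_delta;
            }
            break;
        }
        // (loop points and end-of-sample handling omitted)
    }

The event system then only has to split each audio fragment at the
event timestamps and call RenderVoice() for the sub-blocks, adjusting
pitch and vol_delta in between - sample accurate ramping without any
per-sample conditionals.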