From: Heikki L. <hol...@cs...> - 2009-05-26 20:25:58
Pieter Palmers wrote:
> Jonathan Woithe wrote:
>> Hi Heikki
>>
>>>>> The mixer code for ALSA is in-kernel, and doesn't ffado, at least
>>>>> eventually, need to convert mixer controls to ALSA, too, to be a
>>>>> good ALSA citizen? In that case the communication between the
>>>>> mixer and streaming could go in-kernel.
>>>>
>>>> There is no current plan that I am aware of to utilise the alsa
>>>> mixer model for ffado devices, or in fact to encapsulate all of
>>>> ffado in an alsa driver. One reason is that the control of these
>>>> devices is quite complex and device-specific. Any in-kernel
>>>> solution would consist of a lot of code which, it has been
>>>> suggested, does not really belong in the kernel.
>>>>
>>>> The point of the kernel ffado streaming module is not to put ffado
>>>> in its entirety into the kernel - all we really wish to achieve is
>>>> a data bridge between userspace and the firewire hardware. The
>>>> motivation for doing the bridge in-kernel (as opposed to entirely
>>>> in userspace as we have now) is that we will get much better
>>>> timing guarantees and more direct access to the firewire hardware.
>>>> The suggestion to use an ALSA-compatible PCM interface for the
>>>> streaming data was made simply because alsa-pcm is an
>>>> already-existing standard, and the connection to it for audio data
>>>> will therefore be possible from a wide range of programs (not just
>>>> jack, as is the case now).
>>>
>>> Yeah, I've been following this much :) But I'm sure someone will
>>> suggest moving the mixer parts to the kernel / implementing an ALSA
>>> mixer device, once the pcm device is there.
>>
>> The difficulty is that there isn't a single set of defined methods
>> to control these devices - each family of devices has its own way of
>> doing pretty much everything, and there are often non-trivial
>> dependencies between controls and their values.
>> For example, if the sampling rate is 96000 Hz the available ADAT
>> modes will change. This kind of thing amounts to "policy" in the
>> kernel maintainers' eyes and they are fighting to keep it out of the
>> kernel. If we were to port all the control code into the kernel as
>> an alsa mixer, the result would be huge, in part due to the need to
>> implement it all using a generic framework.
>>
>> That's not to say that it's impossible - it's just that the
>> diversity between ffado devices doesn't seem to be a good fit with
>> the alsa mixer API.
>>
>> Anyway, I'm sure Pieter will have something to add to this
>> discussion since the push to put only the streaming bridge into the
>> kernel came initially from him.
>>
>>>>> If there's a good case for it, I'm sure the ALSA maintainers
>>>>> would consider API additions, too.
>>>>
>>>> The control of these devices goes beyond the "mixer" paradigm and
>>>> is really outside the scope of what the alsa mixer framework was
>>>> set up for. Short of major additions to the mixer API (so
>>>> applications have a hope of laying out a ffado device's controls
>>>> in something approaching a usable manner) it's not going to work
>>>> for us. You'll note that the authors of drivers for the more
>>>> complex PCI cards have already come to the same conclusion - we
>>>> have dedicated control applications in the form of envy24control
>>>> and one for the RME hammerfall devices, for example. FFADO devices
>>>> are pretty much in the same boat here.
>>>
>>> Could you elaborate on the "outside of the scope" bit? I looked at
>>> the envy24control sources briefly, and it very much looks like
>>> they're using the snd_ctl_* ALSA APIs for the actual control, i.e.
>>> they're not poking the hardware registers from the mixer app. What
>>> prevents us from doing the same?
>>
>> I haven't looked at envy24control at all so I can't comment on how
>> it specifically does things.
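[As a concrete sketch of the inter-control dependency described above:
the following C fragment shows the sort of "policy" check that would
have to live in-kernel if the mixer moved there. The mode names and
rate thresholds are illustrative only, not taken from any real FFADO
driver.]

```c
#include <assert.h>

/* Hypothetical ADAT modes: plain 8-channel ADAT at 1x rates, S/MUX
 * (4 channels) at 2x rates, nothing at 4x rates. */
enum adat_mode { ADAT_OFF, ADAT_8CH, ADAT_SMUX_4CH };

/* Return nonzero if the requested ADAT mode is valid at the given
 * sampling rate. This is the inter-control "policy" that a generic
 * ALSA mixer application would know nothing about. */
static int adat_mode_valid(unsigned rate, enum adat_mode mode)
{
    if (mode == ADAT_OFF)
        return 1;                    /* always allowed */
    if (rate <= 48000)               /* 1x rates: full 8-channel ADAT */
        return mode == ADAT_8CH;
    if (rate <= 96000)               /* 2x rates: only S/MUX works */
        return mode == ADAT_SMUX_4CH;
    return 0;                        /* 4x rates: no ADAT at all */
}
```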
>>
>> I kind of outlined some of the difficulties previously. The
>> available controls - and the range of possible values of controls -
>> can change quite significantly depending on the settings of other
>> controls. I gave one example previously. The alsa mixer API does
>> not, AFAIK, have any mechanism for dealing with this kind of
>> behaviour.
>
> At present I'm pretty OK with the way we're doing things with respect
> to control.
>
>>> The layout (and inter-control logic) of the mixer could still be
>>> implemented as a separate app, like envy24control does.
>>
>> Again, I can't comment on envy24control. However, one problem I see
>> with this is that providing an alsamixer interface doesn't stop
>> someone firing up a standard mixer application and having a play.
>> Such a mixer won't know any of the inter-control logic and will just
>> let the user set things however they like. At best things will fail
>> silently, leaving the user puzzled as to what's going on. At worst
>> it'll end up putting the device into some invalid mode, unless the
>> kernel driver was also aware of the inter-control logic and knew
>> what to limit and when. But then we'd have inter-control logic in
>> two places, creating a maintenance nightmare.
>>
>> Anyway, I'm sure Pieter has more coherent arguments than what I've
>> cobbled together on the run today. :-)
>
> I see no advantages in moving the mixer code into the kernel. Would
> that be enough of an argument?
>
> Plenty of disadvantages have been noted already:
> * There is no generic way to control all devices. Even the same
>   device behaves differently depending on the firmware version.
> * There is too much code. Even for simple things such as figuring
>   out what the capabilities of a device are, the code can be huge
>   (especially for AV/C devices). Mixer control can be even worse.
> * Kernel space is not a nice development environment.
> * The APIs offered by ALSA for mixer control don't offer any
>   advantage over our own APIs.
>
> Let me give you a challenge from real life: I'm working on the DICE
> jr/mini routing and mixer control at present. This part of the DICE
> works as follows: you have a set of functional blocks that can
> produce or consume audio. Amongst these are e.g. the 1394 stream
> RX/TX, the ADAT I/O, the InS ports (for ADCs or DACs), etc., for a
> total of more than 40 input and 40 output channels. Then there is
> the mixer, which can consume 18 channels of audio and produce 16
> channels of audio.
>
> The router allows any audio-producing channel to be connected to any
> audio-consuming channel.
>
> The mixer is an 18-input, 16-output full matrix mixer (i.e. you can
> mix all inputs independently to one or more outputs). This means
> 18x16 coefficients (i.e. 'faders').
>
> If you can convince me that ALSA can handle this in a generic (and
> efficient) way, and has an advantage over userspace, I might
> reconsider my opinion.
>
> Note that I'm not against the idea by definition; I just fail to see
> the advantage here. Why move something into the kernel that needs
> neither exclusive device access nor realtime?

I think the answer to this last question would be the original problem
that Jonathan was pondering. All the rest was just, at least from my
part, throwing out half-assed ideas. FWIW, I'm not advocating a move
into the kernel, nor against it either. If things can be done neatly
in user-space then look no further.

-- Heikki Lindholm
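[For concreteness, the 18x16 full matrix mixer described in the quoted
message reduces to the following userspace sketch: each of the 16
outputs is an independently weighted sum of all 18 inputs. The float
coefficient layout here is an illustration; the real DICE hardware's
register format is not modelled.]

```c
#include <assert.h>

#define MIX_INS  18
#define MIX_OUTS 16

/* One gain coefficient ("fader") per input/output pair: 18x16 = 288
 * values, which is what any control API would have to expose. */
static float coeff[MIX_INS][MIX_OUTS];

/* Produce one frame of mixer output from one frame of input. */
static void mix_frame(const float in[MIX_INS], float out[MIX_OUTS])
{
    for (int o = 0; o < MIX_OUTS; o++) {
        float acc = 0.0f;
        for (int i = 0; i < MIX_INS; i++)
            acc += coeff[i][o] * in[i];
        out[o] = acc;
    }
}
```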