From: Jonathan W. <jw...@ph...> - 2009-05-25 23:44:52
Hi,

Over the past few days I've been mulling over the proposed ffado kernel streaming module to see if there are any issues we ought to at least ponder ahead of time. The only one I can foresee at present is how sideband data will fit into the whole scheme of things. Allow me to explain.

In addition to audio and MIDI data streams, some devices use the main data stream to carry additional operational data. An example is the MOTU devices - there is a substream defined for these devices which provides device-setting updates. When utilised, this allows any running mixer/control programs to be updated if a setting is changed on the device's front panel.

The question then arises as to how we might deal with streams like this when the streaming code moves into the kernel. Using an ALSA-compatible device for the audio data is fine, and there are excellent reasons for doing this. However, there is nothing in the ALSA API which would allow us to send these additional sideband streams out to userspace.

I can think of a few possible ways to approach this, but there are undoubtedly more. The first is to simply ignore the existence of these other streams, but that precludes us ever attaining the full functionality of the device concerned. The second is to define an additional ffado-specific kernel-userspace channel to carry this information (possibly via a char device node). This is probably the "neatest" solution, but it does require the development of a new kernel-userspace API, something which may prove difficult to get through lkml. A variation on this is to make the information available via a sysfs interface; this requires more interpretive code in the kernel but doesn't require a new userspace API. The final alternative I thought of is to require userspace to hook into the 1394 iso stream itself if it's interested in the sideband data.
I don't really like this last idea for a number of reasons: it's conceptually messy, it splits stream interpretation between the kernel and userspace, it means multiple processes reading data from the same device, and so on.

Of these ideas, the sysfs interface appeals to me most. One disadvantage is the one-value-per-file restriction, which means we could end up with a very large sysfs directory (in the case of the MOTUs, one sysfs file for each mixer control). It also requires that the sideband streams be decoded in the kernel - but then again, the audio streams will already require decoding, so this is conceptually no different. I think the only other feasible alternative of those presented is the char device; in that case we'd just pass the sideband streams out as-is and rely on userspace to make sense of them.

I would be interested to hear what others have to say on this topic. It is probably something we ought to give some thought to before implementation of the kernel module commences, to ensure that structural decisions aren't made now which will complicate this aspect of things later on.

As a final note I'll add that the MOTU streaming driver currently does capture and decode the sideband data. However, the data isn't used by motumixer because there's no easy way of passing it from the streaming engine (jackd/libffado) to the mixer application - I'm still pondering this issue.

Regards,
jonathan