From: Jonathan W. <jw...@ph...> - 2010-12-07 10:34:12
Pieter Palmers wrote:
> First of all I think that we don't really have a choice than to move
> to ALSA for the streaming engine(s). Not for technical reasons per se,
> but given the Linux audio ecosystem as a whole.
>
> The situation with respect to audio APIs on Linux is bad enough as it
> is, and currently we're only making it worse. Moving to ALSA will at
> least remove FFADO from the list of troublemakers.

I guess my take on this is that I have never really viewed the FFADO API
as something for "public" consumption. Instead I've always seen it as
the glue which permitted jackd to make use of firewire devices. In this
respect the jack API is really the current FFADO streaming API.

> The device aggregation is something that is best implemented in
> userspace (e.g. jack) for the following reasons ...

That's fair enough - they all sound like logical arguments. I don't
really have the requisite background knowledge to land on a particular
side of the fence here.

> Actually I don't really see a technical problem with having multiple
> kernel streaming drivers for the different streaming protocols. At
> worst they are completely separate implementations that expose a
> similar kernel/userspace API ...

Fair enough. The more I ponder the options here, the more I think we
will need separate streaming drivers for each protocol. Trying to
shoehorn everything into a single driver isn't going to be efficient in
either maintenance or performance.

> >> And if this can result in an additional alsa soundcard that just
> >> contains all the io channels where streaming was activated from a
> >> user-space app, that is nice. But unless that's too hard, all
> >> devices on the firewire bus should result in one alsa device.
> >
> > Again, the question would be whether such an in-kernel streaming
> > engine could do this across device protocols. But in principle I
> > agree.
>
> It seems I don't. What would be the exact arguments for this?
> I can only see one being 'user-friendliness'. And the fact that it's
> nice from a conceptual point of view.

Yeah, that's pretty much it from my POV.

> In any case I believe that it's best to take all of this one step at a
> time, not trying to solve everything at once:
> 1) an ALSA streaming driver for IEC61883 devices (i.e. BeBoB, DICE and
>    ECHO devices),
>    * userspace device discovery
>    * no aggregation
>    * limited abstraction
>    * streams are 'always-on', as is the case for the drivers on the
>      other OSes
>    * no outgoing stream w/o an incoming stream
> 2) start porting it to MOTU and RME

When you talk about streams being "always-on", what are you referring
to: the iso data streams, or the individual channel streams?

If the former (iso data streams), then this will pose a problem for MOTU
and RME. Under other OSes the iso streams are not always on, for a very
good reason: while these devices are streaming it is not possible to
change some relatively important device settings via software (sample
rate, optical port mode, and so on).

If instead you're referring to audio channel streams, the "always on"
assumption doesn't hold at the hardware level for either MOTU or RME.
The streaming driver will need to vary the number of audio channels sent
and received based on the hardware configuration.

> The only real point that still has to be resolved in that driver is
> how to treat/use timestamps. I'll have to think about it some more as
> IMHO it is pretty essential to do proper aggregation.

The timestamps are also relatively important to ensure there is some
bounded synchronisation between the incoming and outgoing audio streams.
Then again, ALSA seems to make do without them, at the userspace API
level at least.

> I see two major points where the current FFADO needs improvement:
> (1) device management code (i.e. discovery, mixer control). This needs
>     some improvements so that things don't crash when a device is
>     unplugged.
> (2) streaming engine
>
> For (1) I think a smooth refactoring path is the better option. This
> is not an enormous effort and IMHO is more bugfixing than refactoring.
> As pointed out earlier in this thread, it's also not our core thing.
> Streaming is.

To a point, yes. However, onboard mixers and other DSP are becoming a
significant part of the specifications of these audio interfaces,
especially the latest generations. People are going to want to control
this - perhaps not quite as much as getting audio in and out, but it is
becoming increasingly important. And with some devices (e.g. RME) it is
simply not possible to make the device stream meaningful audio data
without first having control of the device's settings and mixer. For
these devices, device management code is as core as streaming, because
streaming requires it.

> For the streaming engine basically at this point there are three
> options:
> 1) continue with the current engine and libraw
> 2) continue with the current engine, directly on the new stack
> 3) replace the current engine with an ALSA based solution, possibly
>    based upon Clemens' work.
>
> Given the extremely limited development resources we have, we can't do
> all. I would personally choose option 3 as it is IMHO the best
> long-term solution. I also don't think it's very unrealistic. We have
> a test platform in the form of the current FFADO codebase, which is
> quite valuable.

Certainly I think that 1 is out - at the very least we should aim for 2,
simply because raw1394 introduces complications we could do without.
Further issues can certainly be alleviated by moving the streaming into
the kernel.

What concerns me about this also comes down to the limited resources we
have. If the move takes 12 months (say), then that's effectively 12
months where developers like myself are in no-man's land.
Anything we code for the old engine is wasted effort in that it will be
rendered obsolete in 12 months' time and have to be recoded for the new
streaming engine. On the other hand, we could devote all our time to the
new engine, but that will only be workable in 12 months (so from the
point of view of users, development will freeze for 12 months). It's not
an easy balancing act IMHO.

> Furthermore it has been mentioned several times already that current
> FFADO works reasonably well, even for the DICE devices. The urgency of
> a reworked version or a replacement is therefore IMHO not very high.

True. But that still leaves the question of what one should be writing
against when it comes to adding new device support over the next 12
months or so. I personally don't have the time to get streaming working
under one engine only to have to start from scratch with a new engine
in 6-12 months' time.

There are clearly divergent ideas floating around and I think it is
important that we make a firm decision sooner rather than later, so that
we all know where things are heading. We can then target our limited
development time in light of this.

If the decision is to run with a kernel streaming engine, then I think
it's also important that there is a commitment to a timely and
coordinated approach to getting some working code in the short term. I
don't believe it's helpful to ourselves or our users to have the kernel
streaming development drag on for months on end. Unless we can
collectively find the time to agree on the general architecture and code
up a starting point, the decision becomes pointless. That is, it is
completely pointless to agree to do kernel streaming and then have
nothing done for 12 months because no one has the time to make it a
reality.

As a case in point, I am trying to finish off the RME streaming engine
at present.
It's a challenge due to the way those interfaces work, but I think it's
heading in the right direction. However, if it turns out that the
existing streaming engine will be abandoned, then I'm not convinced it's
worth my time to continue down this path - if we're moving to kernel
streaming, I might as well devote my RME time to making that work
instead.

At the end of the day I'll cope with whatever decision is made. However,
things have come to a head and we need to put the decision to bed one
way or another. Ultimately I think it's Pieter's call, since he wrote
the existing streaming infrastructure and knows it and its limitations
better than most.

In an attempt to sum up the kinds of things required of the streaming
engine for the devices I maintain, I'll post a new article soon.

Regards
jonathan