From: Colin W. <wa...@re...> - 2004-11-25 05:03:42
On Wed, 2004-11-24 at 15:02 +0100, Benjamin Otte wrote:
> There is one thing that really annoys me about making these sorts of
> interfaces: an audio sink is not a sample cache.

True.

> What I would prefer would be an abstraction that makes audio frameworks a
> first class struct in GStreamer.
> Kinda like this:
> struct GstAudioFramework {
>   GType inputelement;  /* GST_TYPE_ALSA_SRC */
>   GType outputelement; /* GST_TYPE_ALSA_SINK */
>   GType volumechanger; /* GST_TYPE_ALSA_MIXER */
>   ...
> };
>
> We could then add _one_ interface to each element that requests the audio
> framework. After that we could nicely query the framework about supported
> capabilities and also add capabilities later on without needing to
> fiddle with the elements and risking breaking playback with every change.

Why not do it the other way around: make GstAudioFramework a first-class
object (plugin) and have methods for retrieving the source, sink, volume
control, and sample-caching elements. This is conceptually cleaner, I think,
because that way they can easily share data such as a connection to the
sound server, or an open fd on a sound device.

> As for caching dings in sound servers: I haven't played with the
> different sound servers enough to know if it'd be better to have an
> element as a "cachesink" or if it would be better to make every audio
> framework provide some functions that implement simple things like
> guint id = gst_audio_framework_cache_file (framework, filename); and
> gst_audio_framework_play_cached_file (framework, id);

Well, maybe it's a bit of over-engineering, but it might be nice to be
able to easily cache a short Vorbis file on a sound server that only
supports e.g. WAV. So you'd call gst_audio_framework_get_cache_sink, and
then set up a pipeline with it. If your sound server does Vorbis, then
you can cache the compressed version too. All of this could be hidden
behind a simpler interface, like GstPlay does.
> The thing I'm not so sure about, though, is the fact that you want to get
> the ding-caching method from looking at the audio sink. Those two are
> likely related, but could be completely different. Like using alsa to
> output to $soundserver could still mean I want stuff cached in
> $soundserver directly.

I can't think of when you'd really want that.

> Another thing that no one has bothered thinking about so far is
> live updating when GConf keys change. If I change my audio sink from
> "esdsink" to "polypsink", all my ding-caching apps are going to relocate
> their sound samples to the new server, right?

Yes, well... hard :)