From: Braden M. <br...@en...> - 2002-04-09 00:43:36
On Mon, 2002-04-08 at 10:41, -oli- wrote:
> As I wrote some weeks ago, I'm going to integrate OpenAL support for
> sound in OpenVRML.
Okay. I'm gonna be a bit more explicit than I was before in order to
ensure expectations are set appropriately. I don't want you to complete
this project and then be surprised by its reception.
I don't think OpenAL is the best solution for audio in OpenVRML. At one
time it looked like it might be; however...
* OpenAL's primary user and developer (Loki) is no more.
* There was never a (successful?) attempt to package OpenAL in a
way that would be palatable to end-users and OS distributions.
Even now, the OpenAL Web site directs persons looking for
"1.0" to CVS. The project has reached 1.0, and there's *still*
not a tarball.
So OpenAL is difficult for end-users to install, and prospects for
continued support and development seem dim.
What this means:
* *Yes*, I would accept a patch for OpenAL-based audio support.
* This support would be disabled by default. If someone wanted
to build with it, they'd need to configure with
"--with-openal" or somesuch.
* If OpenVRML also had support for some other audio solution (I
mentioned GStreamer before, and I still think it's our best
bet under Linux), I would not be inclined to spend too much of
*my own* energy maintaining multiple audio solutions in
parallel.
If you think I'm trying to steer you toward GStreamer instead of OpenAL,
you'd be right. However, I appreciate that you may have your own
motivation for wanting to use OpenAL, and I have no intention of
belaboring the issue beyond this posting. I appreciate that your work
*will* improve OpenVRML, even if it's not exactly what I'd do. :-)
> Here I have some questions regarding this:
>
> (1) In which method(s) should I implement the assignment of the sound's
> spatial parameters (position, direction) to the OpenAL properties?
> NodeSound::render(), NodeAudioClip::update(), or NodeAudioClip::render(),
> which doesn't exist yet?
That logic belongs in the Sound node.
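To make that concrete, here's roughly what I have in mind. This is an
untested sketch, and the parameter names are just placeholders for
whatever accessors the node actually exposes:

    #include <AL/al.h>

    /* Push a Sound node's spatial fields into the corresponding
     * OpenAL source properties.  Call this from the Sound node's
     * render path. */
    void setSourceFromSoundNode(ALuint source,
                                const ALfloat location[3],  /* Sound.location */
                                const ALfloat direction[3], /* Sound.direction */
                                ALfloat intensity)          /* Sound.intensity */
    {
        alSourcefv(source, AL_POSITION, location);
        alSourcefv(source, AL_DIRECTION, direction);
        alSourcef(source, AL_GAIN, intensity);
    }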
> Regarding this, the method should be called whenever the position of
> the Sound node or the Viewpoint node changes. Besides that, the
> time-dependent functionality in NodeSound::update() should also be
> maintained.
In the old architecture, that logic would belong in VrmlScene. In the
new architecture, it belongs in SoundClass.
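In other words, SoundClass would be the natural place to keep the OpenAL
listener in sync with the bound Viewpoint. A sketch (the function itself
is hypothetical; note that OpenAL's AL_ORIENTATION wants the "at" and
"up" vectors packed into a single six-float array):

    #include <AL/al.h>

    /* Call whenever the bound Viewpoint moves. */
    void updateListener(const ALfloat position[3],
                        const ALfloat at[3],
                        const ALfloat up[3])
    {
        ALfloat orientation[6] = { at[0], at[1], at[2],
                                   up[0], up[1], up[2] };
        alListenerfv(AL_POSITION, position);
        alListenerfv(AL_ORIENTATION, orientation);
    }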
> (2) At the moment the WAV file is read entirely into memory (see
> audio.h) and then written chunk by chunk to the dsp device (see
> NodeSound::update()). Is this right?
As long as we aren't bothering with streaming, I think that's
acceptable. However, I'm not much of a "sound guy", so I might not know
what I'm talking about here.
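FWIW, if you do go with OpenAL, the chunk-by-chunk writes to the dsp
device should disappear entirely: you hand the decoded PCM to a buffer
once and OpenAL takes care of mixing and output. Something like this
(again a sketch; AL_FORMAT_MONO16 is an assumption, so pick the constant
that matches the WAV's channel count and sample width):

    #include <AL/al.h>

    /* Hand the whole decoded WAV to OpenAL in one shot. */
    ALuint playClip(const ALvoid *pcm, ALsizei bytes, ALsizei rate)
    {
        ALuint buffer, source;
        alGenBuffers(1, &buffer);
        alBufferData(buffer, AL_FORMAT_MONO16, pcm, bytes, rate);
        alGenSources(1, &source);
        alSourcei(source, AL_BUFFER, buffer);
        alSourcePlay(source);
        return source;
    }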
--
Braden McDaniel e-mail: <br...@en...>
<http://endoframe.com> Jabber: <br...@ja...>