From: -oli- <ol...@fr...> - 2002-04-08 14:44:42
Hi, all!

As I wrote some weeks ago, I'm going to integrate OpenAL support for the sound in OpenVRML. Here are some questions regarding this:

(1) In which method(s) can I implement the assignment of the spatial parameters (position, direction) of the sound to the OpenAL properties? NodeSound::render(), NodeAudioClip::update(), or NodeAudioClip::render(), which doesn't exist yet? Whichever method it is, it should be called whenever the position of the Sound node or the Viewpoint node changes. Besides that, the time-dependent functionality in NodeSound::update() should also be maintained. (See sketch 1 appended below.)

(2) At the moment the WAV file is read into memory in its entirety (see audio.h) and then written chunk by chunk to the dsp device (see NodeSound::update()). Is this right? (See sketch 2 below.)

Btw: OpenAL (see http://www.openal.org/) is an open-source, platform-independent API for 3D audio, developed by Loki Software and now maintained by Creative Labs.

As OpenAL uses cones and VRML uses ellipsoids to define source directivity, I'm going to use OpenAL only for the location of the sound sources (spatialisation) and do the directivity stuff in OpenVRML "by hand" (sketch 3 below).

Any tips/opinions/hints are welcome!

-oli-
(Oliver Baum)
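
Sketch 1 - spatialisation: a minimal, untested sketch of pushing the Sound node's location and the current viewpoint into OpenAL. sourceId, soundPos, viewPos, viewAt and viewUp are just placeholders for whatever OpenVRML actually provides at that point, not existing members:

#include <AL/al.h>

// Push the Sound node's position and the current viewpoint into OpenAL.
void updateSpatialisation(ALuint sourceId,
                          const float soundPos[3],
                          const float viewPos[3],
                          const float viewAt[3],
                          const float viewUp[3])
{
    // Sound node location -> OpenAL source position.
    alSourcefv(sourceId, AL_POSITION, soundPos);

    // Viewpoint -> OpenAL listener position and orientation
    // ("at" vector followed by "up" vector).
    alListenerfv(AL_POSITION, viewPos);
    ALfloat orientation[6] = { viewAt[0], viewAt[1], viewAt[2],
                               viewUp[0], viewUp[1], viewUp[2] };
    alListenerfv(AL_ORIENTATION, orientation);
}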
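
Sketch 2 - streaming: with OpenAL the chunk-by-chunk write to the dsp device could be replaced by queueing buffers on the source. Again untested; wavData/wavSize/freq are assumed to come from the existing WAV loader in audio.h, the format is assumed to be 16-bit mono, and a few buffers would have to be generated and queued once before alSourcePlay():

#include <AL/al.h>

// Refill buffers the source has already played and queue them again
// with the next chunks of WAV data.
void queueNextChunks(ALuint sourceId, const char * wavData,
                     int wavSize, int freq, int & offset)
{
    const int chunkSize = 4096;

    ALint processed = 0;
    alGetSourcei(sourceId, AL_BUFFERS_PROCESSED, &processed);

    while (processed-- > 0 && offset < wavSize) {
        ALuint buffer;
        alSourceUnqueueBuffers(sourceId, 1, &buffer);

        int size = wavSize - offset;
        if (size > chunkSize) { size = chunkSize; }

        alBufferData(buffer, AL_FORMAT_MONO16,
                     wavData + offset, size, freq);
        alSourceQueueBuffers(sourceId, 1, &buffer);
        offset += size;
    }
}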
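
Sketch 3 - directivity "by hand": a very crude approximation of the VRML Sound node ellipsoids. The front/back radii are blended by the angle between the node's direction and the vector to the listener, the gain falls from 0 dB at the min boundary to -20 dB at the max boundary (as in the VRML97 spec), and the result goes into AL_GAIN. All names are placeholders again:

#include <AL/al.h>
#include <cmath>

void applyDirectivity(ALuint sourceId,
                      const float soundPos[3], const float soundDir[3],
                      const float listenerPos[3],
                      float minFront, float minBack,
                      float maxFront, float maxBack,
                      float intensity)
{
    float toL[3] = { listenerPos[0] - soundPos[0],
                     listenerPos[1] - soundPos[1],
                     listenerPos[2] - soundPos[2] };
    float dist = std::sqrt(toL[0] * toL[0] + toL[1] * toL[1]
                           + toL[2] * toL[2]);
    float dirLen = std::sqrt(soundDir[0] * soundDir[0]
                             + soundDir[1] * soundDir[1]
                             + soundDir[2] * soundDir[2]);

    // cosAngle: +1 straight in front of the node, -1 straight behind it.
    float cosAngle = 0.0f;
    if (dist > 0.0f && dirLen > 0.0f) {
        cosAngle = (toL[0] * soundDir[0] + toL[1] * soundDir[1]
                    + toL[2] * soundDir[2]) / (dist * dirLen);
    }

    // Crude ellipsoid approximation: blend back/front radii by angle.
    float t = 0.5f * (cosAngle + 1.0f);
    float minR = minBack + t * (minFront - minBack);
    float maxR = maxBack + t * (maxFront - maxBack);

    float gain = intensity;
    if (dist >= maxR) {
        gain = 0.0f;                       // outside the max ellipsoid
    } else if (dist > minR && maxR > minR) {
        // 0 dB at the min boundary, -20 dB at the max boundary.
        float attenuationDb = 20.0f * (dist - minR) / (maxR - minR);
        gain *= std::pow(10.0f, -attenuationDb / 20.0f);
    }

    alSourcef(sourceId, AL_GAIN, gain);
}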