RE: [GD-Windows] Streaming WAVE audio
From: Brian H. <ho...@bo...> - 2004-07-09 05:21:12
I may actually win the award for oldest thread resurrection (this thread is about 2.25 years old). I'm rewriting my streaming audio code and open sourcing it (http://www.bookofhook.com/sal for those who are curious -- it's pre-alpha right now, but it seems to work for the most part on DirectSound, OS X/CoreAudio, OSS/Linux, and ALSA/Linux).

While revisiting the code, I realized I was using three different buffers when in _theory_ I should be able to use one largish buffer, query the output position, and write the necessary number of bytes into it as I mix. The key to this is that the buffer is marked WHDR_BEGINLOOP | WHDR_ENDLOOP, and I query the output position from my mixing thread with waveOutGetPosition(). This works on my system, but it seems to fail on some systems depending on the output format (e.g. one user reported that 44 kHz worked fine but 22 kHz produced massive noise/distortion, using the same executable that works for me).

So the question is: is there a definite, known problem with doing it this way instead of multibuffering?

I can go back to multibuffering fairly trivially and just do something like:

   queue( buffer[ 0 ] );
   queue( buffer[ 1 ] );

Then in my main audio thread:

   while ( 1 )
   {
      if ( buffer[ 0 ].is_done )
      {
         fill( buffer[ 0 ] );
         waveOutWrite( buffer[ 0 ] );
      }
      if ( buffer[ 1 ].is_done )
      {
         fill( buffer[ 1 ] );
         waveOutWrite( buffer[ 1 ] );
      }
      sleep( buffer_len / 2 );
   }

You get the gist -- poll for completed buffers and fill them on demand, relying on sleep + poll instead of an event system, callbacks, or a separate thread function.

Brian
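
For what it's worth, here's a minimal sketch of that poll-and-fill loop against the real waveOut API, assuming 16-bit stereo 44.1 kHz PCM and a hypothetical fill_buffer() mixer routine supplied by the application; the buffer size is arbitrary and error handling/shutdown are elided, so treat it as an illustration of polling WHDR_DONE rather than drop-in code:

   /* Double-buffered waveOut playback via sleep + poll.  Link with winmm.lib. */
   #include <windows.h>
   #include <mmsystem.h>

   #define NUM_BUFFERS  2
   #define BUFFER_BYTES 8192  /* ~46 ms of 44.1 kHz 16-bit stereo PCM */

   /* Hypothetical application mixer: writes 'bytes' bytes of PCM into 'dst'. */
   extern void fill_buffer( void *dst, DWORD bytes );

   static HWAVEOUT s_hwo;
   static WAVEHDR  s_hdr[ NUM_BUFFERS ];
   static char     s_data[ NUM_BUFFERS ][ BUFFER_BYTES ];

   void audio_thread( void )
   {
      WAVEFORMATEX wfx = { 0 };
      int i;

      wfx.wFormatTag      = WAVE_FORMAT_PCM;
      wfx.nChannels       = 2;
      wfx.nSamplesPerSec  = 44100;
      wfx.wBitsPerSample  = 16;
      wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
      wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

      waveOutOpen( &s_hwo, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL );

      /* Prepare, fill, and queue both buffers up front. */
      for ( i = 0; i < NUM_BUFFERS; i++ )
      {
         s_hdr[ i ].lpData         = s_data[ i ];
         s_hdr[ i ].dwBufferLength = BUFFER_BYTES;
         waveOutPrepareHeader( s_hwo, &s_hdr[ i ], sizeof( WAVEHDR ) );
         fill_buffer( s_data[ i ], BUFFER_BYTES );
         waveOutWrite( s_hwo, &s_hdr[ i ], sizeof( WAVEHDR ) );
      }

      /* Poll for completed buffers and refill them on demand. */
      for ( ;; )
      {
         for ( i = 0; i < NUM_BUFFERS; i++ )
         {
            if ( s_hdr[ i ].dwFlags & WHDR_DONE )
            {
               fill_buffer( s_data[ i ], BUFFER_BYTES );
               waveOutWrite( s_hwo, &s_hdr[ i ], sizeof( WAVEHDR ) );
            }
         }

         /* Sleep roughly half a buffer's duration between polls. */
         Sleep( ( BUFFER_BYTES / 2 ) * 1000 / wfx.nAvgBytesPerSec );
      }
   }

Sleeping half a buffer length keeps the poll cheap while still guaranteeing each buffer is refilled before the other one runs dry; shrinking the buffers tightens latency at the cost of waking up more often.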