Thread: [GD-General] Writing an audio mixer
From: Brian H. <bri...@py...> - 2002-03-15 20:17:51
During the lull between Candy Cruncher and our next product I'd like to take a day and write our own audio mixer to minimize our dependencies on per-platform audio mixing capabilities. We can survive without hardware mixing (in fact, we don't support it right now at all because it's so damn flaky).

I've never done an audio mixer before, but I understand the general theory of audio mixing. Anyone have any pointers, tips, gotchas before I dive into it?

The overall architecture I envision is that the app will queue up sounds that will be told to play by our sound manager object. At regular intervals the sound manager is told to mix the currently active audio into a buffer that gets dumped to whatever output device we have. I already have a good chunk of this in place because that's how our streaming audio works -- in fact, it would be relatively trivial for me to just have the streaming audio thread mix in external sound buffers.

The only bit I'm concerned about is servicing the audio via a thread vs. servicing the audio during game updates. A separate thread (or interrupt on MacOS....God, will that operating system PLEASE die?!) at least prevents me from having the audio puke on a long file load.

Brian
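[A minimal sketch of the mix step described above, in C, assuming 16-bit signed mono samples; the names (Voice, g_voices, MixActiveVoices) are illustrative, not taken from any actual codebase in this thread:]

    #include <string.h>

    #define MAX_VOICES 32

    typedef struct {
        const short *data;   /* source samples, 16-bit signed */
        int          length; /* total samples in the sound */
        int          pos;    /* current play position */
        int          active;
    } Voice;

    static Voice g_voices[MAX_VOICES];

    /* Called at regular intervals; fills 'out' with 'count' mixed samples. */
    void MixActiveVoices(short *out, int count)
    {
        int i, s;
        memset(out, 0, count * sizeof(short));
        for (i = 0; i < MAX_VOICES; i++) {
            Voice *v = &g_voices[i];
            if (!v->active)
                continue;
            for (s = 0; s < count && v->pos < v->length; s++, v->pos++) {
                int mixed = out[s] + v->data[v->pos];
                if (mixed >  32767) mixed =  32767;   /* clamp to avoid wraparound */
                if (mixed < -32768) mixed = -32768;
                out[s] = (short)mixed;
            }
            if (v->pos >= v->length)
                v->active = 0;
        }
    }

[Whether this runs from a thread or from the game loop, the core loop is the same; only who calls it and how often changes.]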
From: Mads B. D. <ma...@ch...> - 2002-03-15 21:10:23
On Fri, 15 Mar 2002, Brian Hook wrote:

> During the lull between Candy Cruncher and our next product I'd like to
> take a day and write our own audio mixer to minimize our dependencies on
> per-platform audio mixing capabilities. We can survive without hardware
> mixing (in fact, we don't support it right now at all because it's so
> damn flaky).
[..]
> The only bit I'm concerned about is servicing the audio via a thread vs.
> servicing the audio during game updates. A separate thread (or
> interrupt on MacOS....God, will that operating system PLEASE die?!) at
> least prevents me from having the audio puke on a long file load.

Have you considered looking at SDL? www.libsdl.org - it works on Mac, Linux, Windows and a number of other platforms. It uses the LGPL license, which means that you can do commercial stuff with it as well as OSS (Loki being a prime example of the commercial approach - economic problems aside). Take a look at it - it is not very sophisticated, but it does work.

Another candidate could be "OpenAL" - www.openal.org. The website looks quite stale, but there is current stuff in the CVS. LGPL as well (IIRC). Supported platforms include at least Linux and Windows. (I can't remember if Mac is supported.)

Mads

--
Mads Bondo Dydensborg. ma...@ch...
The Microsoft Dictionary: interoperability: The ability of a Microsoft product to operate with another Microsoft product.
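[For reference, opening an SDL 1.2-era audio device looks roughly like the sketch below; SDL calls the callback from its own audio thread, which is the threaded servicing model being discussed. MixActiveVoices is a placeholder for whatever mixer sits behind it:]

    #include "SDL.h"
    #include <string.h>

    /* SDL invokes this from its audio thread whenever the device needs data. */
    static void audio_callback(void *userdata, Uint8 *stream, int len)
    {
        memset(stream, 0, len);               /* silence by default */
        /* MixActiveVoices((short *)stream, len / 2); */
    }

    int init_audio(void)
    {
        SDL_AudioSpec want;
        memset(&want, 0, sizeof(want));
        want.freq     = 22050;
        want.format   = AUDIO_S16SYS;         /* 16-bit signed, native byte order */
        want.channels = 1;
        want.samples  = 1024;                 /* buffer size in sample frames */
        want.callback = audio_callback;

        if (SDL_Init(SDL_INIT_AUDIO) < 0)
            return -1;
        if (SDL_OpenAudio(&want, NULL) < 0)
            return -1;
        SDL_PauseAudio(0);                    /* start the callback */
        return 0;
    }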
From: <cas...@ya...> - 2002-03-16 04:09:23
Hi,

I will be writing a sound mixer next week, and I have to admit that I have no idea of sound programming. I could use SDL or any other library, but I just want to do it myself to learn how it's done. I've been looking for info about sound programming without much luck; I've only found low-level details about the Sound Blaster :-( So if somebody knows where to find a good tutorial about this, I would really appreciate it.

Anyway, I can just learn that by reading source code. For example, Quake's sound engine runs on every game update, while SDL's and MiniFMOD's run in a different thread. Updating the audio in the game loop seems easier to me, at least I have to write less platform-dependent code, but it seems that you have that work already done. On the other hand, threaded audio seems to be the right thing, since most libraries do that.

Ignacio Castaño
ca...@as...

Brian Hook wrote:
> During the lull between Candy Cruncher and our next product I'd like to
> take a day and write our own audio mixer to minimize our dependencies on
> per-platform audio mixing capabilities. We can survive without hardware
> mixing (in fact, we don't support it right now at all because it's so
> damn flaky).
>
> I've never done an audio mixer before, but I understand the general
> theory of audio mixing. Anyone have any pointers, tips, gotchas before
> I dive into it?
>
> The overall architecture I envision is that the app will queue up sounds
> that will be told to play by our sound manager object. At regular
> intervals the sound manager is told to mix the currently active audio
> into a buffer that gets dumped to whatever output device we have. I
> already have a good chunk of this in place because that's how our
> streaming audio works -- in fact, it would be relatively trivial for me
> to just have the streaming audio thread mix in external sound buffers.
>
> The only bit I'm concerned about is servicing the audio via a thread vs.
> servicing the audio during game updates. A separate thread (or
> interrupt on MacOS....God, will that operating system PLEASE die?!) at
> least prevents me from having the audio puke on a long file load.
>
> Brian
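[A rough sketch of the game-loop servicing model mentioned above, assuming a 22 kHz output rate; GetMilliseconds, MixActiveVoices and WriteToDevice are hypothetical stand-ins for a timer, the mixer, and the platform output layer:]

    extern unsigned GetMilliseconds(void);                  /* hypothetical timer */
    extern void MixActiveVoices(short *out, int count);     /* the mixer */
    extern void WriteToDevice(const short *buf, int count); /* platform output */

    #define SAMPLE_RATE 22050

    /* Call once per frame; mixes only as many samples as have elapsed. */
    void UpdateAudio(void)
    {
        static unsigned last_ms;
        static short scratch[8192];
        unsigned now_ms  = GetMilliseconds();
        unsigned elapsed = now_ms - last_ms;
        int samples;

        if (elapsed > 200)     /* first call or a long hitch: cap the catch-up */
            elapsed = 200;
        samples = (int)(elapsed * SAMPLE_RATE / 1000);
        if (samples <= 0)
            return;

        MixActiveVoices(scratch, samples);
        WriteToDevice(scratch, samples);
        last_ms = now_ms;
    }

[The obvious weakness is visible in the cap: anything that stalls the game loop, such as a long file load, starves the audio, which is exactly the concern raised in the first message.]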
From: Mads B. D. <ma...@ch...> - 2002-03-16 10:38:50
On Sat, 16 Mar 2002, Ignacio Castaño wrote:

> Hi,
> I will be writing a sound mixer next week, and I have to admit that I have no
> idea of sound programming. I could use SDL or any other library, but I just
> want to do it myself to learn how it's done. I've been looking for info
> about sound programming without much luck; I've only found low-level details
> about the Sound Blaster :-(
> So if somebody knows where to find a good tutorial about this, I would
> really appreciate it.

I do not know about tutorials, but I wrote my own mixer once in about 1000 lines of code. It used a separate thread, because I thought that to be easier. The problem with my code was that I could only support mono 8 bit samples. This is easy to do. When you want to support stereo, or even more channels, and formats where the byte order is important (e.g. 16 bit samples), it begins to suck. Also, you need to support a number of different interfaces if you wish your code to work across platforms.

> Anyway, I can just learn that by reading source code. For example, Quake's
> sound engine runs on every game update, while SDL's and MiniFMOD's run in a
> different thread. Updating the audio in the game loop seems easier to me,
> at least I have to write less platform-dependent code, but it seems that you
> have that work already done. On the other hand, threaded audio seems to be
> the right thing, since most libraries do that.

Threaded is easier. For _ultimate_ performance, you want to schedule yourself, in this case from the game loop. But, seriously, who needs that? Carmack might, because he writes to high-end metal, but the rest of us? You lose SMP benefits, etc. YMMV.

Mads

--
Mads Bondo Dydensborg. ma...@ch...
I disapprove of what you say, but I will defend to the death your right to say it.
 - Beatrice Hall [pseudonym: S.G. Tallentyre], 1907 (many times wrongfully attributed to Voltaire)
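[A small sketch of the format handling described above: converting and mixing 8-bit unsigned mono source data into a 16-bit signed interleaved stereo buffer. The function name is illustrative, and byte order is assumed to already be native:]

    /* Mix 'src_samples' 8-bit unsigned mono samples into an interleaved
       L/R 16-bit signed buffer 'dst' that already holds other voices. */
    void Mix8MonoInto16Stereo(const unsigned char *src, int src_samples,
                              short *dst)
    {
        int i;
        for (i = 0; i < src_samples; i++) {
            /* 8-bit audio is usually unsigned, centered on 128;
               convert to signed 16-bit range before summing. */
            int sample = ((int)src[i] - 128) << 8;
            int left   = dst[i * 2]     + sample;
            int right  = dst[i * 2 + 1] + sample;
            if (left  >  32767) left  =  32767;
            if (left  < -32768) left  = -32768;
            if (right >  32767) right =  32767;
            if (right < -32768) right = -32768;
            dst[i * 2]     = (short)left;
            dst[i * 2 + 1] = (short)right;
        }
    }

[Supporting every combination of source and destination format is where the line count grows; a mono 8-bit-only mixer like the one described above stays tiny.]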
From: Mickael P. <mpo...@ed...> - 2002-03-18 08:51:58
> During the lull between Candy Cruncher and our next product I'd like to
> take a day and write our own audio mixer to minimize our dependencies on
> per-platform audio mixing capabilities. We can survive without hardware
> mixing (in fact, we don't support it right now at all because it's so
> damn flaky).

When I'm writing small demos, I generally use something like FMOD or BASS, two nice libraries that unfortunately exist only on the PC... All that to say that in the FMOD documentation the author explains that his tests show that software mixing is generally faster and of better quality than hardware mixing, due to various things like the fact that the drivers are crappy...

> I've never done an audio mixer before, but I understand the general
> theory of audio mixing. Anyone have any pointers, tips, gotchas before
> I dive into it?

Basically it's just a loop that takes the signed sum of the values of interpolated samples... nothing to be mad about :)

The first thing to consider is that when you are doing software mixing you have a virtually unlimited number of voices. You can also perform reverberation pretty easily. Pausing the sound replay also becomes very easy.

Among the problems I see is the fact that you have to double buffer to make it efficient. If you make the buffer too long, you will have too much delay in the sound replay. If you make it too short, you can suffer problems with frame rate variation (tip => do it in another thread). [Note: Sometimes instead of double buffering you can have ring buffers with read/write pointers.]

You can also have problems with interpolation. I remember that numerous demo-making sites talk about this (about sound tracker replays, and how to make it nice from Adlib to GUS cards), and especially everything related to "clicks", looped samples, ...

Mickael Pointier
Eden Studios
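[A sketch of the "signed sum of interpolated samples" loop described above, using 16.16 fixed-point stepping and linear interpolation for pitch-shifted voices; the names and the fixed-point layout are illustrative choices, not anything prescribed by FMOD or BASS:]

    /* Mix one voice into 'dst', resampling 'src' by 'step'
       (16.16 fixed point, e.g. src_rate * 65536 / out_rate). */
    void MixResampledVoice(const short *src, int src_len,
                           short *dst, int dst_len, unsigned step)
    {
        unsigned pos = 0;      /* 16.16 position into src */
        int i;
        for (i = 0; i < dst_len; i++) {
            int idx  = pos >> 16;
            int frac = pos & 0xFFFF;
            int sample, mixed;
            if (idx + 1 >= src_len)
                break;
            /* Linear interpolation between the two nearest source samples
               reduces the gritty aliasing of nearest-neighbor resampling. */
            sample = src[idx] + (((src[idx + 1] - src[idx]) * frac) >> 16);
            mixed  = dst[i] + sample;
            if (mixed >  32767) mixed =  32767;
            if (mixed < -32768) mixed = -32768;
            dst[i] = (short)mixed;
            pos += step;
        }
    }

[Clicks at loop points come from the same place: if the sample value jumps abruptly between the loop end and the loop start, the discontinuity is audible, so loops generally need to be cut or cross-faded at matching values.]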