Code Design: Audio Programming

This topic is 2846 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Well, some may have recognized my post earlier today suggesting that the GameDev.net forums should have a section specifically for audio programming. Since that doesn't exist at this time, I'll get by using General Programming for now. I considered Software Engineering, since this is a design question, but figured here was just as well.

I am planning to build a flexible sound system interface for which I can program different implementations later. I have created a (simple) sound system in the past with DirectX, supporting WAV and OGG, buffered or streaming. I liked the design I had, but it certainly wouldn't be able to use OpenAL, FMOD, XACT, or any other libraries out there. My plan is to do something like this:

[code]
/* A single point of access for loading and manipulating audio. */
class CAudioSystem;
class CAudioSystemDirectSound : public CAudioSystem;
class CAudioSystemOpenAL : public CAudioSystem;
// You get the idea...

/* This is used by a CAudioSystem to play sound data. */
class CAudioData;

// The direct subclasses would look like this:
class CAudioBuffer : public CAudioData;
class CAudioStream : public CAudioData;

// Grandchildren, which convert data from a file into PCM data,
// which pretty much any sound library uses / is compatible with (I believe):
class CAudioBufferWAV : public CAudioBuffer;
class CAudioStreamWAV : public CAudioStream;
class CAudioBufferOGG : public CAudioBuffer;
class CAudioStreamOGG : public CAudioStream;
// You get the idea...

/* The audio controller can Play, Stop, control volume, pitch, etc. */
class CAudioController;
// Subclasses could be along the lines of:
// class CAudioController3D, which handles 3D sound.
[/code]

[quote=self]
Note: I've also thought of adding a small interface to manipulate, decompress, and get PCM data. I don't really like the idea, but it may mean less code duplication, which is generally the better trade-off.

[code]
class CAudioDataManip;
// With subclasses:
class CAudioDataManipOGG : public CAudioDataManip;
class CAudioDataManipWAV : public CAudioDataManip;
[/code]

I think this is a slightly cleaner design, since CAudioBufferOGG and CAudioStreamOGG would likely duplicate code on some level or another - IF the abstract CAudioData class did not have a proper setup. I would like to eliminate this Manipulator class, but doing so properly requires a correct implementation of the base.
[/quote]

-------------------

Okay, so my limited knowledge of audio programming is probably blaring through the design of this system. From the reading I've done, it may not be possible to keep the CAudioController separated from the CAudioData like I want - though from a design point of view I think the above is fairly flexible, and it's easy to add support for different systems and file formats by implementing just a couple of interfaces.

I may do a three-part sound article, as somewhat suggested in my other thread. The first article would cover a basic audio system that plays sound effects. The following article would be about streaming music (the two could be combined into one if short enough). The next article would go a bit deeper, with positional audio in a three-dimensional world. Of course, this stuff is not quite advancing game audio as we know it, but it is a starting point.
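To make the separation between data and control concrete, here is a minimal sketch of what the abstract layer might look like. Everything beyond the class names listed in the post (the SAudioFormat struct, ReadPCM, Load, CreateController, BytesPerSecond) is an assumption for illustration, not a committed design.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical PCM format description shared by every backend.
struct SAudioFormat {
    int channels;      // 1 = mono, 2 = stereo
    int sampleRate;    // e.g. 44100
    int bitsPerSample; // e.g. 16
};

// Helper: raw PCM data rate for a given format.
inline std::size_t BytesPerSecond(const SAudioFormat& f) {
    return static_cast<std::size_t>(f.channels) * f.sampleRate
         * (f.bitsPerSample / 8);
}

// Decoded (or decodable) sound data, format-agnostic from the mixer's view.
class CAudioData {
public:
    virtual ~CAudioData() {}
    virtual SAudioFormat Format() const = 0;
    // Fill 'dest' with up to 'bytes' of PCM; returns bytes written.
    // A CAudioBuffer would hand back pre-decoded data; a CAudioStream
    // would decode incrementally from the file.
    virtual std::size_t ReadPCM(void* dest, std::size_t bytes) = 0;
};

// Playback control, kept independent of the data it plays.
class CAudioController {
public:
    virtual ~CAudioController() {}
    virtual void Play() = 0;
    virtual void Stop() = 0;
    virtual void SetVolume(float v) = 0; // 0.0 .. 1.0
    virtual void SetPitch(float p) = 0;  // 1.0 = unchanged
};

// Single point of access; one subclass per backend (DirectSound, OpenAL, ...).
class CAudioSystem {
public:
    virtual ~CAudioSystem() {}
    virtual CAudioData* Load(const std::string& path, bool stream) = 0;
    virtual CAudioController* CreateController(CAudioData* data) = 0;
};
```

With a pull-style ReadPCM on the base class, the OGG/WAV decoding lives entirely in the CAudioData subclasses, which is exactly what would let the separate Manipulator class disappear.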

You are right that there isn't a whole lot of literature or tutorials for sound programming that go far beyond just playing sounds and streaming music. One possible reason is that it is extremely game-specific. If you really do write some tutorials, you should cover the following:

- the sound system should know the location of the player, so it can handle positional audio and distance attenuation, or skip playing a sound entirely if the event occurs too far from the player.

- the sound system should use the spatial partitioning of the game engine to optimize audio. For example, in an RTS 500-1000 units may be strewn across the map. You don't want to do Vector3.Distance() to the player for every single unit to see if it is within audio range of the player.
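The culling in those two points can be done cheaply by comparing squared distances (no sqrt per unit) and by only testing the candidates the spatial partition already found near the listener. This is an illustrative sketch; the Vec3/AudibleSources names are assumptions, not an existing API:

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Squared distance avoids a sqrt per tested entity.
inline float DistanceSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// 'candidates' should come from the engine's spatial partition (the cells
// around the listener), not from all 500-1000 units on the map.
std::vector<Vec3> AudibleSources(const std::vector<Vec3>& candidates,
                                 const Vec3& listener, float maxRange) {
    std::vector<Vec3> audible;
    float maxSq = maxRange * maxRange;
    for (const Vec3& p : candidates)
        if (DistanceSq(p, listener) <= maxSq)
            audible.push_back(p);
    return audible;
}
```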

- there should be some way to limit the number of instances of a sound effect. If 50 things all explode at once, you don't want to play 50 instances of the same sound.
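A per-sound instance cap can be as simple as a counter keyed by sound id, checked before playback starts. A sketch under assumptions (class and method names are invented for illustration):

```cpp
#include <cassert>
#include <map>
#include <string>

// When 50 things explode at once, only the first m_max callers get to play.
class CInstanceLimiter {
public:
    explicit CInstanceLimiter(int maxPerSound) : m_max(maxPerSound) {}

    // Returns true if this sound may start now; callers pair this with OnStop.
    bool TryPlay(const std::string& soundId) {
        int& count = m_active[soundId]; // default-initialized to 0
        if (count >= m_max) return false;
        ++count;
        return true;
    }

    void OnStop(const std::string& soundId) {
        int& count = m_active[soundId];
        if (count > 0) --count;
    }

private:
    int m_max;
    std::map<std::string, int> m_active;
};
```

A fancier version would steal the quietest or most distant voice instead of refusing outright, but the counting structure is the same.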

- there should be an easy way for designers to associate things with specific sounds. For example game objects (units, weapons, etc.), materials (footsteps, impacts, etc.), events (collisions), and spatial partition zones (environmental effects).
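The usual way to give designers that hook is a data-driven table: game code fires abstract event keys ("footstep.stone", "weapon.rifle.fire") and the table maps them to assets, so designers edit a data file rather than code. A minimal sketch, with invented names and example keys:

```cpp
#include <cassert>
#include <map>
#include <string>

// Maps abstract event keys to sound assets; in practice Bind() would be
// driven by a file the designers own rather than called from code.
class CSoundBindings {
public:
    void Bind(const std::string& eventKey, const std::string& soundFile) {
        m_table[eventKey] = soundFile;
    }

    // Empty string means "no sound bound"; callers simply skip playback.
    std::string Lookup(const std::string& eventKey) const {
        std::map<std::string, std::string>::const_iterator it =
            m_table.find(eventKey);
        return it == m_table.end() ? std::string() : it->second;
    }

private:
    std::map<std::string, std::string> m_table;
};
```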

- there should be some form of randomization or variation of a specific sound, so every gunshot doesn't sound exactly the same. It can be automated, or multiple sound effects can be associated with one type of sound event.
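Both variation techniques fit in a few lines: pick one of several recorded takes, and jitter the pitch slightly around 1.0. The names and the 5% jitter range below are illustrative, and the random value is passed in so the logic stays deterministic and testable:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct SVariedSound {
    std::string file;
    float pitch; // multiplier around 1.0
};

// rand01 is a random value in [0, 1), injected by the caller.
SVariedSound PickVariation(const std::vector<std::string>& takes,
                           float rand01, float pitchJitter = 0.05f) {
    SVariedSound out;
    // Choose one of the recorded takes.
    std::size_t idx = static_cast<std::size_t>(rand01 * takes.size());
    if (idx >= takes.size()) idx = takes.size() - 1;
    out.file = takes[idx];
    // Map rand01 from [0, 1) onto a pitch in [1 - jitter, 1 + jitter].
    out.pitch = 1.0f + (rand01 * 2.0f - 1.0f) * pitchJitter;
    return out;
}
```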

Those are all absolute basics for a decently usable system. Advanced stuff would be dynamic orchestration and lip-sync animation.
