Jiraya

Is audio always frame rate independent?



I'm currently learning XAudio2, and so far I think I've got a general idea of the basics (load sound data into buffers, submit the buffers through a source voice to a mastering voice, and so on).

So I was wondering: does audio need to be in sync with the game's frame rate? I watched some YouTube videos of games being played at very low and very high frame rates, and all the music and sound effects played back the same. I suppose that isn't surprising; otherwise the audio would play at a variable speed depending on the frame rate, which would ruin the experience completely.

But still, is there any situation where audio needs to take the game's FPS into account (rhythm games or lip sync, perhaps)?



Perhaps positional audio: if a sound's position is driven by the position of a model or physics object, then it is indirectly affected by FPS. That's the only case that comes to mind for me right now, as a lot of sound processing is handed off to drivers that run at close to real-time scheduling priority. If the FPS drops too low, however, you might find that the game can't fill its audio buffers fast enough and you get stuttering.
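To make that concrete, here is a minimal sketch of what per-frame positional audio might look like with X3DAudio on top of XAudio2. It assumes the XAudio2 engine, a stereo mastering voice, and a mono source voice already exist (as in the basic setup described above); physicsObject and camera are hypothetical game-side objects.

    #include <windows.h>
    #include <xaudio2.h>
    #include <x3daudio.h>

    // One-time setup.
    X3DAUDIO_HANDLE x3d;
    X3DAudioInitialize(SPEAKER_STEREO, X3DAUDIO_SPEED_OF_SOUND, x3d);

    X3DAUDIO_LISTENER listener = {};
    listener.OrientFront = { 0.0f, 0.0f, 1.0f };
    listener.OrientTop   = { 0.0f, 1.0f, 0.0f };

    X3DAUDIO_EMITTER emitter = {};
    emitter.ChannelCount        = 1;
    emitter.CurveDistanceScaler = 1.0f;

    float matrix[2] = {};                         // mono source -> stereo output
    X3DAUDIO_DSP_SETTINGS dsp = {};
    dsp.SrcChannelCount     = 1;
    dsp.DstChannelCount     = 2;
    dsp.pMatrixCoefficients = matrix;

    // Every game frame: the emitter follows the model / physics object, so panning and
    // Doppler only update as often as the game loop runs.
    emitter.Position  = physicsObject.position;   // hypothetical game-side data
    listener.Position = camera.position;          // hypothetical game-side data
    X3DAudioCalculate(x3d, &listener, &emitter,
                      X3DAUDIO_CALCULATE_MATRIX | X3DAUDIO_CALCULATE_DOPPLER, &dsp);
    sourceVoice->SetOutputMatrix(masteringVoice, 1, 2, dsp.pMatrixCoefficients);
    sourceVoice->SetFrequencyRatio(dsp.DopplerFactor);

The actual mixing still happens on the audio side at its own rate; only the parameters above change once per game frame, which is why a low FPS mostly shows up as coarse panning/Doppler updates rather than broken playback.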


I see, now I understand why audio sometimes stutters in some games :D

Thanks guys!

7 hours ago, Hodgman said:

If your game runs at a slower framerate, your "audio buffer" just needs to be longer. As above, if audio is at 48kHz and the game at 60Hz, then the buffer needs to be at least 800 audio samples in length. If the game is running at 30Hz, the audio buffer needs to be at least 1600 audio samples in length. In practice, most game audio products will just use some conservative buffer length like 5000 samples by default to be safe.
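The arithmetic from the quote, as a tiny sketch (48 kHz and the two frame rates are just the example numbers above):

    #include <cstdio>

    int main()
    {
        const int sampleRate = 48000;              // audio samples per second
        const int frameRates[] = { 60, 30 };
        for (int fps : frameRates) {
            // Minimum number of samples that must be queued to cover one game frame.
            int minSamples = sampleRate / fps;     // 60 Hz -> 800, 30 Hz -> 1600
            std::printf("%d fps -> at least %d samples buffered\n", fps, minSamples);
        }
        return 0;
    }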

Or the audio can be running on a different thread. The audio buffer is often split into multiple tiles, and you fill one or several as they become free, perhaps in a callback. If you don't fill them in time you can get audio glitches as the engine plays tiles that still contain old data. The size and number of tiles affect the audio latency (the gap between triggering a sound and hearing it). With a small buffer you get low latency, but you need to ensure it is filled on time; with a larger buffer there is less pressure to keep it filled in a timely fashion.
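In XAudio2 terms those "tiles" are usually just two (or more) buffers submitted to a source voice and refilled from a voice callback. A rough sketch of the shape of it, assuming 16-bit mono data; the tile size and the FillNextAudio function are hypothetical:

    #include <windows.h>
    #include <xaudio2.h>

    void FillNextAudio(BYTE* dest, UINT32 bytes);         // hypothetical: decode/mix the next chunk

    // Two "tiles": while one is playing, the other is being refilled.
    const UINT32 kTileSamples = 1600;                     // roughly two game frames at 48 kHz / 60 fps
    const UINT32 kTileBytes   = kTileSamples * sizeof(short);
    BYTE tiles[2][kTileBytes];
    int  nextTile = 0;

    struct StreamCallback : IXAudio2VoiceCallback
    {
        IXAudio2SourceVoice* voice = nullptr;

        // Runs on the audio thread whenever a submitted buffer finishes playing.
        void STDMETHODCALLTYPE OnBufferEnd(void*) override
        {
            // Refill the tile that just freed up and queue it again. If this is late,
            // the voice runs dry and you hear the stutter mentioned above.
            FillNextAudio(tiles[nextTile], kTileBytes);
            XAUDIO2_BUFFER buf = {};
            buf.AudioBytes = kTileBytes;
            buf.pAudioData = tiles[nextTile];
            voice->SubmitSourceBuffer(&buf);
            nextTile = 1 - nextTile;
        }

        // Remaining callbacks are required by the interface but unused here.
        void STDMETHODCALLTYPE OnVoiceProcessingPassStart(UINT32) override {}
        void STDMETHODCALLTYPE OnVoiceProcessingPassEnd() override {}
        void STDMETHODCALLTYPE OnStreamEnd() override {}
        void STDMETHODCALLTYPE OnBufferStart(void*) override {}
        void STDMETHODCALLTYPE OnLoopEnd(void*) override {}
        void STDMETHODCALLTYPE OnVoiceError(void*, HRESULT) override {}
    };

Fewer, larger tiles give more slack but more latency; more, smaller tiles lower the latency but the refill has to land on time.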

If the audio is running on a different thread, the frame rate may instead affect the 'granularity' of sound effects in the game, unless the audio wrapper and system specifically compensate for this. Typically a game issues a command like PlaySound(GUNSHOT); if it is running at 1 fps, a bunch of these may be issued at the same moment rather than spread out over the second.
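One way a wrapper can compensate (a sketch of the general idea, not any particular engine's API; the sound IDs and StartVoiceAtOffset are hypothetical): timestamp the requests with game time and let the audio thread start each sound at the matching sample offset inside the buffer it is mixing.

    #include <mutex>
    #include <vector>

    enum SoundId { GUNSHOT, FOOTSTEP };                    // hypothetical sound IDs
    void StartVoiceAtOffset(SoundId id, int startSample);  // hypothetical: start a voice that many
                                                           // samples into the current buffer

    struct SoundCommand { SoundId id; double gameTime; };  // gameTime in seconds

    std::mutex                g_queueLock;
    std::vector<SoundCommand> g_queue;

    // Game thread: cheap even at 1 fps, because it only records the request.
    void PlaySoundAt(SoundId id, double gameTime)
    {
        std::lock_guard<std::mutex> lock(g_queueLock);
        g_queue.push_back({ id, gameTime });
    }

    // Audio thread: called when mixing the buffer that starts at bufferStartSec. Several
    // gunshots issued during one long game frame still get spread out by their timestamps.
    void MixQueuedSounds(double bufferStartSec, int sampleRate)
    {
        std::lock_guard<std::mutex> lock(g_queueLock);
        for (const SoundCommand& cmd : g_queue) {
            double offsetSec = cmd.gameTime - bufferStartSec;
            if (offsetSec < 0.0) offsetSec = 0.0;          // arrived late: start immediately
            StartVoiceAtOffset(cmd.id, static_cast<int>(offsetSec * sampleRate));
        }
        g_queue.clear();
    }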

9 hours ago, Jiraya said:

But still, is there any situation where audio needs to take the game's FPS into account (rhythm games or lip sync, perhaps)?

It's more likely to be the other way around, if anything. If you can query the audio for how far through playback it is, you know how far to advance your game. With an accurate general-purpose timer this is less of an issue, as most games are designed to be frame rate independent, i.e. they use that timer to decide how far to advance the game. The longer the audio, the greater the chance of drift between the general-purpose timer and the audio playback rate, and as far as I know different sample players are not exact. This is more likely to be an issue in audio / music apps than in games, though.
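With XAudio2 specifically, "querying the audio how far through it is" maps to IXAudio2SourceVoice::GetState; a small sketch, assuming musicVoice is the source voice playing the track and you know the sample rate of the data you submitted:

    #include <windows.h>
    #include <xaudio2.h>

    // Seconds of audio the voice has actually played, straight from the audio engine.
    // Handy as the master clock when the game has to follow the audio rather than a general timer.
    double GetAudioTimeSeconds(IXAudio2SourceVoice* musicVoice, double sampleRate)
    {
        XAUDIO2_VOICE_STATE state = {};
        musicVoice->GetState(&state);              // SamplesPlayed accumulates since the voice started
        return static_cast<double>(state.SamplesPlayed) / sampleRate;
    }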

Also note that you could, in theory, change the rate at which the audio plays, but that normally changes the pitch as well, which might be noticeable, especially with a varying frame rate. You can also time-stretch the audio to change the play rate without changing the pitch, but that would probably be a very messy solution to this particular problem.
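For reference, in XAudio2 that rate change is IXAudio2SourceVoice::SetFrequencyRatio, and it behaves exactly as described: it resamples, so pitch moves with speed (assuming the voice wasn't created with XAUDIO2_VOICE_NOPITCH):

    // Play 10% faster; the pitch rises by the same factor because this is plain
    // resampling, not time stretching.
    sourceVoice->SetFrequencyRatio(1.1f);

    // Back to normal speed and pitch.
    sourceVoice->SetFrequencyRatio(1.0f);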

15 hours ago, Jiraya said:

But still, is there any situation where audio needs to take the game's FPS into account (rhythm games or lip sync, perhaps)?

For those types of games it's generally done the other way around - the graphics are positioned based on timing information from the audio. That allows you to just render as fast as possible, and still keep the graphics in sync with the audio.
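A rough sketch of that pattern, using the audio clock (e.g. the GetState query shown earlier) to place the notes each frame; the Note layout, bpm, and DrawNoteSprite are hypothetical:

    #include <vector>

    struct Note { double beat; };                  // hypothetical chart data
    void DrawNoteSprite(float y);                  // hypothetical renderer call

    // songTimeSec comes from the audio engine, so the notes stay locked to the music
    // no matter how fast or slowly the game renders.
    void DrawChart(const std::vector<Note>& chart, double songTimeSec,
                   double bpm, float pixelsPerBeat, float hitLineY)
    {
        const double currentBeat = songTimeSec * (bpm / 60.0);
        for (const Note& note : chart) {
            double beatsAway = note.beat - currentBeat;          // beats until this note should be hit
            float  y = hitLineY - static_cast<float>(beatsAway * pixelsPerBeat);
            DrawNoteSprite(y);
        }
    }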

