Moritz P.G. Katz

Posted 21 March 2013 - 05:34 PM

Hey Ollie,

> I'm looking for any help regarding how the acoustic behavior of sound in the real world is emulated in 3D game environments. How much of an understanding of reverb is required when designing sound for games? Does a sound designer ever consider the inverse-square law when creating attenuation curves?

As specialized as game audio may seem, I think people still have widely varying focuses.
No doubt there are people whose job it is to consider acoustic details and outline a realistic audio environment, but that's probably more in the hands of audio programmers. Most people doing the creative work are using tools already fit for the job, especially nowadays: from algorithmic and impulse-response reverb plug-ins (AU/VST/RTAS) through audio middleware like FMOD or Wwise to dedicated audio engines.
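Since you mention the inverse-square law: in a free field, sound pressure falls off as 1/r, which works out to about 6 dB of level lost per doubling of distance. That's the physical baseline the attenuation curves in middleware approximate (or deliberately bend). A minimal sketch in Python; the function name is my own, not any engine's API:

```python
import math

def inverse_square_gain_db(distance_m: float, ref_distance_m: float = 1.0) -> float:
    """Free-field spherical spreading: pressure ~ 1/r, so the level drops
    20*log10(r/r_ref) dB, i.e. about 6 dB per doubling of distance."""
    d = max(distance_m, ref_distance_m)  # clamp: never louder than at the reference distance
    return -20.0 * math.log10(d / ref_distance_m)

for d in (1, 2, 4, 10, 50):
    print(f"{d:>3} m: {inverse_square_gain_db(d):6.1f} dB")  # 0.0, -6.0, -12.0, -20.0, -34.0
```

In practice hardly anyone ships the raw law: FMOD and Wwise both let you choose logarithmic, linear, or hand-drawn rolloff curves per sound, with clamped minimum and maximum distances, because the physically correct falloff often feels too steep in gameplay.

And "impulse response reverb" is less mysterious than it sounds, too: the room's response to an impulse is recorded once, and the plug-in convolves it with your dry signal. A toy version, assuming you already have the dry signal and the IR as arrays:

```python
import numpy as np

def apply_ir_reverb(dry: np.ndarray, ir: np.ndarray, wet_mix: float = 0.3) -> np.ndarray:
    """Convolution reverb: the recorded room response (IR) is convolved
    with the dry signal, then mixed back in. Real plug-ins also handle
    level matching, latency, and stereo, which this toy version skips."""
    wet = np.convolve(dry, ir)[: len(dry)]  # trim the reverb tail for an equal-length mix
    return (1.0 - wet_mix) * dry + wet_mix * wet
```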

And while I've studied that stuff for a few semesters ("systematic musicology" is what they call it over here), rare is the case where I have to pull out the old calculator to make things sound the way I want them to, and I'm very, very glad about that.
On the other hand, knowing some basic acoustics never hurts either, even if it's just to set up a proper monitoring environment or to get a good starting point when choosing and positioning microphones in the studio.
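For example, one of the few calculations I actually reach for when setting up a monitoring room is the axial room-mode series, f_n = n * c / (2 * L), which tells you where the low end will pile up. A quick sketch; the room dimensions here are invented for illustration:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def axial_modes(length_m: float, count: int = 4) -> list[float]:
    """First axial standing-wave frequencies along one room dimension:
    f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2.0 * length_m) for n in range(1, count + 1)]

# Hypothetical 5.0 x 4.0 x 2.5 m room: the low-end bumps to listen for
for name, dim in (("length", 5.0), ("width", 4.0), ("height", 2.5)):
    print(name, [round(f, 1) for f in axial_modes(dim)])
```

Where those frequencies cluster, you can expect boomy or dead notes at the listening position, which is exactly the kind of thing worth knowing before you blame your monitors.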

But I can only speak for myself, really. Hope that helps, though!

Cheers,
Moritz
