What's the best way to learn audio middleware? Wwise, FMOD...


Recommended Posts


I'm a composer, sound designer and voice actor with a focus on audio for games. I've been doing this full-time for about 8 months now and have done some small projects.

I've been applying for in-house jobs and have had one interview. However, I didn't get the job; lack of experience was cited as the deciding factor. In particular, I've found that I fall short on the audio middleware requirements in these job adverts.

I've taken cursory glances at Wwise and FMOD but haven't really delved into them yet. I've also worked through a few audio implementation tutorial videos in Unity.

My question is: for those of you with experience of Wwise and FMOD, what's the best way to learn how to use them? Any books, websites or videos you'd recommend? I've seen courses offering tuition on them, but they tend to be expensive enough to make me consider the self-taught approach...


Matthew Dear


If you're an experienced sound designer, I wouldn't bother with the classes; 90% of what you would learn is redundant. I'd recommend grabbing a sample project and then just spending as much time as it takes to become comfortable with the software. Unity and FMOD work great together, and there are a ton of resources available for each to get you started.


Hi Matthew, 


The Wwise-101 Certification course is free and well made for learning the core Wwise workflow; you may want to take a look at it. For a more advanced sample project, the Wwise Project Adventure is a great resource (a handbook with a corresponding Wwise project). It's also free and available from the installer.






The best way, imo, is to use it on a project you make yourself.


You can use Unity, the problem being that you need the Pro version for Wwise or FMOD to work with it (and Unity's basic audio system is really bad). The Pro version costs a lot of money if you're a student without a salary.


You can use UE4, which is pretty cheap; its basic sound system is way better too, or you can use Wwise with it (I'm not sure about FMOD), but it's harder to learn.

Edited by Valoon

Simon raises a good point: Audiokinetic has a Wwise certification course which will guide you through the basics of how to use Wwise. It's very much worth your while to go through it.
http://www.soundlibrarian.com/fmod-suite.html also provides a course, developed in collaboration with FMOD.
In a nutshell, audio middleware handles all the common, repetitive tasks of authoring, managing and communicating how audio responds to a game. It's actually quite similar to a game engine, which handles playing animations, moving 3D objects around, providing lighting and authoring landscapes. Different game engines have different tools and ways to address these things, and the same goes for audio middleware.

At a basic level, different middleware packages provide ways of authoring how sound responds to game events. Each has its own strengths, so understanding the basic concepts behind them will let you adapt to each when necessary and then just learn the interface.

Here's an example of a common basic concept middleware helps address.

1. Sound To Game Event is no longer a 1:1 ratio.

Let's use a bullet impact as an example.

Simplest way:
- Play Sound: Bullet_impact
- Gets very repetitive and unrealistic

Expanding on this to add randomization:
- Play a sound from a pool, Bullet_impact_random (bullet_impact_01, bullet_impact_02, bullet_impact_03, bullet_impact_04)
- More realistic, but you start to hear the repetition after a while, and you could potentially get bullet_impact_01, bullet_impact_01, bullet_impact_02, bullet_impact_01, bullet_impact_03 ...

Expanding on this to add rules:
- Play a sound from the Bullet_impact_random pool
- But don't repeat either of the last 2 sounds played
- bullet_impact_01, bullet_impact_03, bullet_impact_04, bullet_impact_01 (ok, because it's not in the last 2), bullet_impact_02, bullet_impact_03 ...
- This is more believable, but we can still improve...
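The "don't repeat the last 2" rule above is easy to sketch in code. Middleware does this for you internally; this is just a minimal Python illustration of the selection logic (the file names are the hypothetical ones from the example):

```python
import random

def make_no_repeat_picker(pool, history_size=2):
    """Pick randomly from pool, never repeating any of the last `history_size` picks."""
    history = []
    def pick():
        candidates = [s for s in pool if s not in history]
        choice = random.choice(candidates)
        history.append(choice)
        if len(history) > history_size:
            history.pop(0)  # only remember the most recent picks
        return choice
    return pick

pick = make_no_repeat_picker(
    ["bullet_impact_01", "bullet_impact_02", "bullet_impact_03", "bullet_impact_04"]
)
sequence = [pick() for _ in range(20)]
# No sound ever appears within 2 picks of itself.
```

In Wwise this corresponds to a Random Container with "avoid repeating last N played" enabled; you never write this code yourself.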

Add extra behavior:
- Volume randomization (+/- 2 dB)
- Pitch randomization (+/- 300 cents): you expand the pool somewhat
- Random high-frequency filtering: some bullet impacts are sharp, some are duller sounding.
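To show what those randomization ranges actually mean in playback terms, here's a small sketch (assuming the +/- 2 dB and +/- 300 cent ranges from above) converting dB to a linear gain and cents to a playback-rate ratio:

```python
import random

def randomized_playback_params(vol_db_range=2.0, pitch_cents_range=300.0):
    """Return (gain, rate) multipliers for one sound instance."""
    vol_db = random.uniform(-vol_db_range, vol_db_range)            # +/- 2 dB
    cents = random.uniform(-pitch_cents_range, pitch_cents_range)   # +/- 300 cents
    gain = 10 ** (vol_db / 20)   # dB -> linear amplitude multiplier
    rate = 2 ** (cents / 1200)   # cents -> playback-rate ratio (1200 cents = 1 octave)
    return gain, rate
```

Every trigger of the same source file gets slightly different values, so even a single sample stops sounding identical on repeat.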

Now, let's be really clever: let's cut all these files into a transient (the surface impact) and a tail (the ricochet sound) and rebuild them in real time.
- Impact_01 + Tail_02, Impact_01 + Tail_03, Impact_03 + Tail_01
- Add all the other behaviors to each of these including randomization of pitch, volume and filtering.
- You can now see that you have a huge pool of sounds from 4 simple files.
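The combinatorial win is easy to count. Assuming 4 impact transients and 4 tails (hypothetical file names), recombining them gives 16 base variations before any pitch/volume randomization is even applied:

```python
import itertools
import random

impacts = ["Impact_01", "Impact_02", "Impact_03", "Impact_04"]
tails = ["Tail_01", "Tail_02", "Tail_03", "Tail_04"]

# Every transient can pair with every tail: 4 x 4 = 16 base combinations.
combinations = list(itertools.product(impacts, tails))

def play_random_bullet_impact():
    impact, tail = random.choice(combinations)
    # In-engine, both samples would be triggered simultaneously;
    # here we just return the pairing for illustration.
    return f"{impact}+{tail}"
```

Layer the +/- 2 dB and +/- 300 cent randomization on top of each pairing and the effective pool becomes far larger than the 4 source files would suggest.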

Then let's give this a usable context by allocating the sounds to a surface type:
- Each set of bullet sounds can be allocated to a choice structure.
How would the programmer call these? You could name your audio cues (using Wwise as an example) Play_bullet_impact_metal, Play_bullet_impact_dirt, Play_bullet_impact_concrete, Play_bullet_impact_wood and ask the programmer to call the appropriate cue by surface type.
Or you can set these up with a parameter the programmer can pass in: Metal = 0, Dirt = 1, Concrete = 2, Wood = 3.
Play_bullet_impact(2) would then trigger the concrete sound. The surface type can be allocated by artists in their textures, so the programmer would need to obtain the surface number from the texture and pass it to the sound engine.
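The parameter-based approach boils down to a simple lookup on the game-code side. A sketch using the hypothetical surface indices from above (real middleware would resolve this via a switch/parameter on the event, not a dictionary in game code):

```python
# Hypothetical surface indices the programmer passes in.
SURFACES = {0: "metal", 1: "dirt", 2: "concrete", 3: "wood"}

def play_bullet_impact(surface_id):
    # Fall back to a default surface if the id is unknown.
    surface = SURFACES.get(surface_id, "concrete")
    return f"Play_bullet_impact_{surface}"

play_bullet_impact(2)  # -> "Play_bullet_impact_concrete"
```

The advantage of the single parameterized event is that the sound designer can add or remap surfaces later without the programmer touching game code.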

Then finally, you may need to allocate a volume attenuation to these sounds.
- Maximum volume out to 1 meter, then roll off logarithmically to 30 meters.
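One possible shape for that attenuation curve, sketched in Python (the exact curve is something you'd author visually in the middleware; this just assumes a gain of 1.0 inside 1 m falling logarithmically to 0 at 30 m):

```python
import math

def attenuation(distance, min_dist=1.0, max_dist=30.0):
    """Gain factor: 1.0 inside min_dist, rolling off logarithmically to 0 at max_dist."""
    if distance <= min_dist:
        return 1.0
    if distance >= max_dist:
        return 0.0
    # Logarithmic curve normalized so gain(min_dist) = 1 and gain(max_dist) = 0.
    return 1.0 - math.log(distance / min_dist) / math.log(max_dist / min_dist)

attenuation(1.0)   # -> 1.0 (full volume at or inside 1 meter)
attenuation(30.0)  # -> 0.0 (silent at 30 meters)
```

Both Wwise and FMOD let you draw and audition curves like this per sound, rather than hard-coding them.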

In the past, to achieve this kind of complex sound design behavior:
1. You'd need a much larger pool than 4 sounds to do it by hand, and memory limits cap the permutations you can build.
2. You'd also need engineering time for audio code that lets you author these behaviors and do other complex things. Middleware lets the audio designer do most of this themselves without huge amounts of engineering resources, which allows us to be more creative and get instant feedback on our design choices.

So you can see from this example that the way the game plays audio, and how sounds are implemented, heavily influences how you design your sounds; and vice versa, how the game sounds depends heavily on how the audio is implemented.

I learned these middleware tools by taking components of games (footsteps, ambience, impacts, explosions, adaptive music...) and figuring out how I could implement the different parts of game audio, and the pros and cons of each approach. There's a lot of video content, documentation and GDC talks addressing how game audio can be implemented.

Both FMOD and Wwise allow you to test your sound implementation with the behaviors you've authored, so you can hear how they might 'feel' in the game environment.

Hopefully this very, very brief description helps you understand why audio middleware knowledge is necessary for an in-house sound designer.

There is a large misconception that working in-house as a sound designer just means you make cool sounds all day; that is just one function of the job. Contract sound designers tend to funnel wav files over, but these middleware authoring tools now let you contribute to how the sound is supposed to play in the game by authoring the behavior yourself.

Edited by GroovyOne
