
#5280604 1 hr of Voice Acting translates into how much hrs of Voice Actor work?

Posted by on 10 March 2016 - 04:27 PM

By the way, I highly recommend this piece of kit. It's a batch processor which takes a lot of the guesswork out of levelling dialog.


I run it before processing and mastering to get my VO to a consistent level across the session, and then once more afterwards to ensure it adheres to the final dialog levels we want in game.
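
The post originally embedded a link to a specific tool; as a stand-in sketch of the same batch-levelling idea, here's a minimal Python wrapper around ffmpeg's EBU R128 loudnorm filter (the folder name vo_session is hypothetical, and it assumes ffmpeg is installed):

import pathlib
import subprocess

# EBU R128 dialog target; adjust I / TP / LRA to your project's delivery spec.
LOUDNORM = "loudnorm=I=-23:TP=-1:LRA=7"

for wav in sorted(pathlib.Path("vo_session").glob("*.wav")):
    out = wav.with_name(wav.stem + "_levelled.wav")
    # Single-pass loudnorm keeps the sketch short; two-pass is more accurate.
    subprocess.run(["ffmpeg", "-y", "-i", str(wav), "-af", LOUDNORM, str(out)],
                   check=True)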

#5272187 Which type of sounds you are looking for most ?

Posted by on 21 January 2016 - 03:29 PM

Haha.. the gecko sounds either like a creaky door or a land dolphin..


I wonder whether, if you slowed it down, it might sound like a choir singing...

#5270095 Do you ever hold back on music

Posted by on 08 January 2016 - 11:17 AM

I'm wondering why you would want to hold it back in the first place. Did you sign away all rights to the music in the contract?


I've written some pretty good tunes in my time for clients but never held them back - I didn't have any reason to. Part of the business is learning to let go, and also to negotiate better up front.


Like CCH says: unless contractually specified, it's up to you whether to put the piece forward for review / acceptance or not. Think of it this way: even if the game is low profile, wouldn't you want it out there, published as an example of your skill - part of your portfolio of what you can do for games? You can think of it as an investment rather than a loss.

#5259305 Which to learn first: Wwise or FMOD?

Posted by on 27 October 2015 - 12:37 PM

Agreed with Brian.


It's the concepts you want to understand. The middleware provides the way to implement those concepts.


It's like saying - should I learn ProTools or Nuendo first?


Learn signal flow, basic concepts of what a DAW does, understand audio theory and mixing concepts.. then apply those to the tools.


I'm going to be difficult, and say - learn both at the same time!


If you can, do implementation tutorials for the same material in both toolsets so you get a good understanding of the differences.

#5254342 Where can I find high quality reference tracks for free?

Posted by on 28 September 2015 - 01:51 AM

One of the things that will help is improving your analytical listening. Active listening involves training your ear to home in on elements in a song: taking a whole mix and deconstructing it so you can analyze how it was put together.


Reference tracks are well-mixed songs in the genre you are working in, used as a tool to help re-create the same EQ, reverb, panning and volume for the parts. You can then A/B your mix with the reference track and see how they compare.


Draw a square on a piece of paper. Listen to a song you're using as a reference.


Within that square, draw the instruments and where they sit in the stereo field, and how far back they are in the mix - volume, placement, reverb wetness, etc. - as you perceive it.

Identify the processing used - chorus, flange, echo, reverb, EQ.


You should be able to construct a fairly good graphic representation of a mix. This is the blueprint you can apply to your own song.


These are actual exercises we had to do in an audio engineering course. We also learned how to hear individual frequencies on a 20-band EQ. Our lecturer used to test us twice a week by boosting or ducking one of the bands.


Mixing is like playing an instrument: you don't just do it. You need to learn the basics first, then practice listening to others play to try to emulate them, and practice playing to improve. A lot of this skill is practice - learning how to hear things in the music, and then understanding how your processing and EQing will affect the mix.


There's no specific 'reference track' library per se, just songs that you aspire to mix as well as.

#5217010 Mastering Seperate 'Adaptive Audio' Tracks

Posted by on 16 March 2015 - 10:43 PM

Hey Matt, 


This sounds fascinating; you can certainly try to prototype this in Wwise.


Experiment with 3D attenuation graphs.


Try using a music bus and running some DSP over it (Wwise supports things like compressors, side-chain compression / ducking, EQ, etc.). There are some third-party vendors of fairly good DSP out there as well.


It really depends on what you're trying to do with the objects: how many other tracks play, whether they fade out to silence or just duck a bit to put focus on the other object, and whether you can cross-fade to the fully mixed, produced track within a certain radius. There are lots of different things to try. Some will need engineering support and a way for you to tune the results; most you can prototype in the audio authoring middleware.

#5216204 Good articles or tutorials on mixing?

Posted by on 12 March 2015 - 09:12 PM

Mixing isn't about setting all the dials and leaving them where they are for that perfect mix. Things move; you need to control elements and shift them around so they come into or out of focus in the mix. The final mix may sound more static than that, but that is the art - to produce something that is glued together well.


The exercises we were given in audio engineering taught us not only how to dissect pieces of music but also how to identify frequencies by ear. You can think of these exercises as practicing your scales and techniques, so that when you play your instrument, your fingers go to the right places at the right pressures to create the piece of music. So too must you train your ears and mind to understand and hear things - once you really start analyzing, you will not only learn from other people's mixes but also identify areas in your own mixes where you can improve.



A good way to learn how to mix is to study a track similar to the one you are trying to create - break it down into a square, like a cross-section of a cube viewed from the top down. The face closest to you is the front of the mix; the sides are left and right.


From this, analyze your reference track and figure out:

1. Positioning of instrument in stereo field. (L/R)

2. Tone of the individual instrument (color) - bright, dull, thin, thick.

3. Loudness of each instrument.

4. Depth of the instrument (wet/dry reverb)

5. Draw directional arrows when things move around in the mix.


You can then take this visual diagram and start to apply it to the similar piece of music you are trying to mix. 


Practice using a 24-band EQ on your favorite piece of music. Boost individual bands by 3 or 6 dB - have someone else do this, or automate it in your DAW - and see if you can identify those frequencies correctly. Training your ears to hear frequencies really helps with mixing, pinpointing issues and analyzing other pieces of music.
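
If you want to automate that drill outside a DAW, here's a minimal Python sketch (assuming numpy/scipy and a hypothetical local file named reference.wav) that boosts one randomly chosen band with a standard RBJ audio-EQ-cookbook peaking filter:

import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q=1.4):
    # RBJ audio-EQ-cookbook peaking-filter coefficients
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs, x = wavfile.read("reference.wav")          # hypothetical file name
x = x.astype(np.float64)
bands = [63, 125, 250, 500, 1000, 2000, 4000, 8000]
f0 = int(np.random.choice(bands))              # the "answer" to the quiz
b, a = peaking_eq(fs, f0, gain_db=6.0)
y = lfilter(b, a, x, axis=0)
wavfile.write("quiz.wav", fs, np.clip(y, -32768, 32767).astype(np.int16))

Listen to quiz.wav against the original and guess the band before checking f0.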


There's a great book out there called Mixing With Your Mind which has some fantastic easy to understand analogies and techniques for mixing.


Most of the tools - compression, EQ, reverb - used to shape your instruments into the mix will just take a lot of practice. It took me many, many years to fully understand how various compressors work and how to use them to shape things like snares, kicks, hats, etc. I'm still learning, and you always keep learning - mixing is definitely one of those lifelong things, and as you practice you will learn to mix better over time. I have been mixing for over 10 years and am still learning and practicing!

#5211257 Chiptune softwares

Posted by on 17 February 2015 - 02:01 PM

ReFX also has QuadraSID - a C64-based sound generator.


I usually write my chiptunes by creating the samples myself - basic waveform generation - and then use trackers to write the music: OpenMPT or Renoise.

#5211094 Can you detect any difference between these sound effects?

Posted by on 16 February 2015 - 07:50 PM

To some degree the mics will color the sound a little - in this example, the PCM-D50 recorder is quite sparkly; it grabs a lot of high end. The Sennheiser picks up a little more detail and the room noise. But you can hear from the recordings that mic choice produces slightly different results.


The most important thing when recording foley is to collect your sounds so the raw material is easy to work with.

1. Clean, noiseless sounds - create as silent an environment as possible. You can always EQ or process the sounds to fit together better, but removing background noise is difficult.

2. Reverbless sounds (unless it is a creative choice, or you need the sound in a particular acoustic setting where the natural room reverb is wanted) - you can always add reverb to some degree, but removing it is difficult.

3. Different perspectives - up-close and slightly further-away sounds record differently. Up close there's a phenomenon called the proximity effect.


So I wouldn't focus so much on which microphone sounds best, but on which microphone will work for you in your recording environment to get the sounds you need at the best quality you can record - hence, rent your equipment before buying it and experiment with a few different mics.


If you go with a few different types of mics at different distances - using your ears to place them during your recording session - you will get a decent set of sounds to work with. If you have one mic at one distance, you will struggle: more recordings, more editing time, more recording time, and less variation in your material, meaning more post-processing to make things sound like they belong in the same world. It's $30 for an extra mic per day, versus $??? / hr to edit, process and record a smaller amount of workable material.


No sound you capture with a mic is a final asset that goes straight into a game / cutscene. You will always need to process the sounds - clean them up, and fit them to what is happening visually through trimming, fading, cleaning and EQing.


I have never used a recording just as-is - even footsteps. Perhaps this is what you're worried about: trying to find the right mic that sounds fantastic right out of the gate?

#5210605 Anyone have tips on doing foley production?

Posted by on 14 February 2015 - 12:10 AM

Here's a quick mic comparison with the mics I own. 


This was a glass of water being poured, tapped with another glass, and flicked.


Recorded in a bedroom - with a PC about 4 feet away - pillows blocking it as best I could.




Sennheiser MKH-60

And for kicks, a portable:

Sony PCM-D50 stereo - native mics in an XY pattern, set up at 96 kHz / 24-bit.

All mics were roughly the same distance and angle, using exactly the same gain setting on the input - no mic preamp except what is in the M-Audio Delta 66 Omni Studio box I have (a very, very old sound card). Recorded at 48 kHz / 16-bit.


Unfortunately, one of my mic inputs just died, so I had to re-record each take individually. While they're not exactly the same sound, you can glean a little about each mic.


Of course, if the room had been silent, the MKH and the Rode would have performed better.

#5210341 Anyone have tips on doing foley production?

Posted by on 12 February 2015 - 02:52 PM

Ya know, if you're just doing this one time - you can also check out your local equipment rental places.


A quick search yields a 416 mic for $85 for 7 days' rental. -- ADDENDUM: another search turned up $30 PER DAY.



You'll need a recorder with a mic preamp and phantom power - also rentable.


To get the most out of your session, I'd rent two different mics - one at a slight distance and one closer - to let you choose which perspective you'll use, or mix them.


For water you will need something a little away from the liquid, hence the shotgun mic. You don't want to get your mics wet! If you can find dry condoms (they do exist, but probably by special order), or just take a regular condom, wash it out and make sure there's absolutely nothing in it, you can cover your mic with it. The membrane is thin enough to let sound through while keeping the mic dry from little drops. Even better, put a pop filter in front of the mic to stop the liquid hitting the condom membrane hard and making little pop noises.


If you are hitting things, turn the gain down on the preamp so you don't clip the close mic. This way you'll get a good amount of detail from the ring-out, and you can always edit / fade in the transient, or blend with the distant mic.


To create a space that deadens the room a bit, you can use cardboard boxes or some kind of frame and drape a very large feather duvet over it. Line reflective areas with pillows, or hang a thick curtain to help remove some of the higher frequencies.


As for recording distances, use your ears to find the sweet spot for each channel. This is also why you want two mics at different distances: you get better recordings without wasting time on two takes, one close and one distant, which won't yield the same sound anyway.


You may also want to put plastic down on the floor if working with water, then lay towels on top to soak up extra splash and keep plastic-feet / splash sounds from interfering with your recording.


A good book to buy is Mixing With Your Mind - it talks about mic placement, among other things, in an easy-to-understand fashion. We were recommended this book when I went through my audio engineering course.

#5209685 Looking for critique on my new RPG forest theme...

Posted by on 09 February 2015 - 04:07 PM

Great job.


In my opinion, it is a little on the busy side, but this is mainly due to the perception created by the mix and the speed of the song.




Use more expression - volume envelopes to control the parts so they sound less static. I went through this myself when I was learning to compose orchestral / live-instrument pieces.


Let your wind instruments take more pronounced breaks - listeners will feel less claustrophobic / out of breath. This goes for brass and anything else that requires breathing; it's a natural part of music. If you need the line to continue, perhaps use two different flutes - pan them a little apart and let them play question-and-response, which will also add another dynamic to your music.


Perhaps slowing the theme down a little will give it more space, so it doesn't feel like everything is clambering to be heard.


Bring the tambourine down a little - give it a few more breaks too, and use swells and dips to bring it up and down to accentuate the parts of the music you want to give emotion to.


Finally, the pizzicato plucks: high-pass EQ them and drop them a little in volume. Right now they feel huge in comparison to the flutes, and they are only a supporting instrument.


The main instruments are your piccolo/flute and oboe so let them have the limelight a little more.

#5208939 How do you learn to compose different genres?

Posted by on 05 February 2015 - 03:41 PM

I wanted to add mimicking: what I mean by 'studying' isn't only listening over and over, but actually taking a piece and trying to replicate 20-60 seconds of it note for note, sound for sound. Listening only does so much; you learn far more by doing, as it's a great practical way to program your mind to really start learning something new.


By replicating you can learn to mimic and then personalize:

  • Melodic structure / counter melody / harmony / phrasing / tempo
  • Sonic Identity - choice of instruments and producing the tones that work together to make a particular song's style.
  • Mixing technique - how to put everything together in a mix so the piece is a final product. Stereo field placement, depth and how everything is glued together.

For example, when I first started trying to write orchestral music (I am not at all classically or orchestrally trained), I began by looking at where the sections sit in physical space - understanding how an orchestra's layout affects how you mix. At the time I didn't have any fancy pre-mixed orchestral libraries, just a General MIDI synth, so I set up my individual instruments in a virtual-space reverb plugin. I also learned the instrument ranges so I stayed true to how they are played. Writing a melody initially started out as me using all the instruments at once, then reworking the song, breaking sections out to add counter-melodies or supporting harmonies.




I took this further by creating a small sample library for the GBA - using only 8 voices of polyphony at a time - and practiced writing small demo tracks with it. This paid off, as I started getting work.


  • The first 3-4 songs are from Dragon Ball Z: The Legacy of Goku II (I had to transcribe the show's composer's music by ear while adding a little of my own flair and technique). Mimicking / transcribing around an hour of that music for the project really imprinted some composition techniques / styles.
  • Midway through you'll hear a 'Simpsons' theme that isn't the Simpsons theme; the exercise was to make a sound-alike by mimicking the style, tempo and other idiosyncrasies that made up the theme.
  • The final piece, now that I think of it, has original elements, but the Lois theme from Superman gave me some inspiration.


Using this same trick of mimicking, I had to learn Cartoon Network-styled music for a Cartoon Network game. Note, I had never written anything like this before, but the same stylistic / orchestration tricks apply.



Studying the Serenity theme, I took this further and tried writing something that had some of its stylistic idiosyncrasies - space western. I'd also never written anything like this before, and it's probably one of my best orchestral works to date.




Then, for an iOS title, I was asked to rewrite a ragtime piano piece they had licensed and make it Elfmanesque - Beetlejuice / World of Goo trailer style. By using the technique of breaking the music down into its components (tempo / phrasing / orchestration / mixing / melodic content), I was able to hit what they were looking for.


#5207714 What's the best way to learn audio middleware? Wwise, FMOD...

Posted by on 30 January 2015 - 11:54 AM

Simon raises a good point - Audiokinetic has a Wwise certification course which will guide you through the basics of how to use Wwise. It's very much worth your while to go through this.
Sound Librarian (http://www.soundlibrarian.com/fmod-suite.html) also provides a course, created by someone who has worked with FMOD to develop the content.
In a nutshell, audio middleware takes care of all the common, repetitive tasks of authoring, managing and communicating how audio responds to a game. It's actually quite similar to a game engine, which handles playing animations, moving 3D objects around, lighting and authoring landscapes. Different game engines have different ways and tools to address these things, and the same is true of audio middleware.

At a basic level, different middleware packages provide ways of authoring how sound responds to game events. Each has its own strengths, so understanding the basic concepts behind them will allow you to adapt to each when necessary and then just learn the interface.

Here's an example of a common basic concept middleware helps address.

1. Sound-to-game-event is no longer a 1:1 ratio.

Let's use a bullet impact as an example.

Simplest way:
- Play sound: Bullet_impact
- Gets very repetitive and unrealistic.

Expanding on this to add randomization:
- Play a sound from a pool, Bullet_impact_random (bullet_impact_01, bullet_impact_02, bullet_impact_03, bullet_impact_04).
- More realistic, but you start to hear the repetition after a while, and could potentially get bullet_impact_01, bullet_impact_01, bullet_impact_02, bullet_impact_01, bullet_impact_03 ...

Expanding on this to add rules:
- Play a sound from the Bullet_impact_random pool.
- But don't repeat any of the last 2 sounds played.
- bullet_impact_01, bullet_impact_03, bullet_impact_04, bullet_impact_01 (OK, because it's not in the last 2), bullet_impact_02, bullet_impact_03 ...
- This is more believable, but we can still improve.
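
As a minimal sketch of that selection rule - plain Python, not any particular middleware's API:

import random

class SoundPool:
    """Pick randomly from a pool, never repeating any of the last N picks."""
    def __init__(self, sounds, avoid_last=2):
        self.sounds = list(sounds)
        self.avoid_last = avoid_last
        self.history = []

    def next(self):
        # Exclude whatever played most recently, then pick at random.
        candidates = [s for s in self.sounds if s not in self.history]
        pick = random.choice(candidates)
        self.history.append(pick)
        self.history = self.history[-self.avoid_last:]
        return pick

impacts = SoundPool(["bullet_impact_01", "bullet_impact_02",
                     "bullet_impact_03", "bullet_impact_04"])
print([impacts.next() for _ in range(8)])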

Add extra behavior:
- Volume randomization (+/- 2 dB).
- Pitch randomization (+/- 300 cents): this expands the pool somewhat.
- Random high-frequency filtering: some bullet impacts are sharp, some are duller sounding.

Now, let's be really clever: cut all these files into a transient (the surface impact) and a tail (the ricochet sound) and rebuild them in real time.
- Impact_01 + Tail_02, Impact_01 + Tail_03, Impact_03 + Tail_01 ...
- Add all the other behaviors to each of these, including randomization of pitch, volume and filtering.
- You can now see that you have a huge pool of sounds from 4 simple files.
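
A sketch of the recombination idea, again as illustrative Python rather than real authoring-tool behavior (the pool names are from the example above):

import random

impacts = ["Impact_01", "Impact_02", "Impact_03", "Impact_04"]
tails   = ["Tail_01", "Tail_02", "Tail_03", "Tail_04"]

def build_variant():
    # 4 transients x 4 tails = 16 base combinations, multiplied again
    # by continuous volume / pitch / filter randomization.
    return {
        "impact":      random.choice(impacts),
        "tail":        random.choice(tails),
        "volume_db":   random.uniform(-2.0, 2.0),    # +/- 2 dB
        "pitch_cents": random.uniform(-300, 300),    # +/- 300 cents
        "lpf_hz":      random.uniform(4000, 20000),  # random hf filtering
    }

print(build_variant())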

Then let's give this a usable context by allocating the sounds to a surface type:
- Each set of bullet sounds can be allocated to a choice structure.
How would the programmer call these? You could name your audio cues (using Wwise as an example) Play_bullet_impact_metal, Play_bullet_impact_dirt, Play_bullet_impact_concrete and Play_bullet_impact_wood, and ask the programmer to call the appropriate cue by surface type.
Or you can set these up with a parameter the programmer can pass in: Metal = 0, Dirt = 1, Concrete = 2, Wood = 3.
Play_bullet_impact (2) would trigger the concrete sound. The surface type can be allocated by artists in their textures, so the programmer would obtain the surface number from the texture and pass it to the sound engine.
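On the game-code side, that parameter-to-cue mapping can be as simple as the sketch below (plain Python; SURFACE_CUES and post_event are illustrative stand-ins, not a real middleware API):

# Surface IDs as in the example above: Metal = 0, Dirt = 1, Concrete = 2, Wood = 3
SURFACE_CUES = {
    0: "Play_bullet_impact_metal",
    1: "Play_bullet_impact_dirt",
    2: "Play_bullet_impact_concrete",
    3: "Play_bullet_impact_wood",
}

def play_bullet_impact(surface_id, post_event):
    # post_event stands in for the middleware call (e.g. posting a Wwise event);
    # fall back to concrete for unknown surfaces.
    post_event(SURFACE_CUES.get(surface_id, "Play_bullet_impact_concrete"))

play_bullet_impact(2, print)   # -> Play_bullet_impact_concrete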

Then, finally, you may need to allocate a volume attenuation to these sounds:
- Maximum volume out to 1 meter, then roll off logarithmically to 30 meters.
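
One way to read that curve in code - a hedged sketch, since the exact roll-off shape is something you author per game: full volume inside 1 m, then a log-shaped fade down to a floor at 30 m.

import math

def attenuation_db(distance_m, min_dist=1.0, max_dist=30.0, floor_db=-60.0):
    """0 dB inside min_dist, logarithmic roll-off down to floor_db at max_dist."""
    if distance_m <= min_dist:
        return 0.0
    if distance_m >= max_dist:
        return floor_db
    # Fraction of the way through the log-distance range
    t = math.log(distance_m / min_dist) / math.log(max_dist / min_dist)
    return floor_db * t

for d in (0.5, 1, 2, 5, 15, 30):
    print(f"{d:>4} m -> {attenuation_db(d):6.1f} dB")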

In the past, to achieve this kind of complex sound-design behavior:
1. You'd need a much larger memory pool than 4 sounds to bake the permutations by hand, so there was a limit on how many you could build.
2. You'd also need engineering time for the audio code to let you author these behaviors and do other complex things. Middleware lets the audio designer do most of this themselves without huge amounts of engineering resources, which allows us to be more creative and to get instant feedback on our design choices.

So you can see from this example that how sounds are implemented heavily influences how you design them - and, vice versa, how the game sounds depends heavily on how the audio is implemented.

I learned these middleware tools by taking components of games - footsteps, ambience, impacts, explosions, adaptive music - and figuring out how I could implement different parts of game audio, and the pros and cons of those approaches. There's a lot of video content, documentation and GDC talks that address how game audio can be implemented.

Both FMOD and Wwise allow you to test your sound implementation with the behaviors you have authored, so you can hear how they may 'feel' in the game environment.

Hopefully from this very brief description you can understand why audio middleware knowledge is necessary for an in-house sound designer.

There is a large misconception that working in-house as a sound designer just means you make cool sounds all day; that is just one function of the job. Contract sound designers tend to just funnel wav files over - but these middleware authoring tools now let you contribute to how the sound plays in the game by authoring the behavior yourself.

#5207534 Education vs Industry Experience

Posted by on 29 January 2015 - 03:11 PM

So I see that you already have what is commonly called a diploma of Sound Production or Audio Engineering.


From your demo, it's clear you have grasped the concepts of implementation, sound design and mixing.


I'd say go for the job, and try to complete the degree portion of your course part-time. Don't give up a good opportunity. What you learn on the job may be far more valuable than sitting in academia for another year hoping the same opportunity arises again.


I hold a diploma in audio engineering, but I also have a software engineering degree and 8 years of software industry experience, so for me the game-implementation side of things has been easy to learn and pick up from attending GDC, watching and reading media.


My job prospects have come from past job experience combined with self-learning. Do try to complete the degree portion if you feel the skills you'll learn there are beneficial to you, or even just as a feather in your cap.