In Part 1 of “Improving Communication With Your Sound Designer”, we explored the tools of the trade,
the processes of production and ways to talk tech with your sound designer/composer. This article digs deeper into ways to discuss creative concepts and changes based upon Audio Tools,
Samples/Instruments and Creative Structures.
Anyone who knows the SomaTone Interactive Audio team knows how dedicated we are to good communication with our clients and team members. Our company culture is built on the belief that valuable,
consistent & effective communication creates outstanding creative products and relationships. Since we are a team of creative designers (taking our clients' visions and turning them into sounds),
it is imperative for us to have sophisticated communication tools. (I personally am a self-proclaimed communication junkie. Having spent 6 years training with leadership/communication training
experts Landmark Education, 2 years certifying as a Neuro-Linguistic Programming Practitioner and 2 years in Laban Movement Analysis training, I am always seeking better ways to communicate abstract ideas.)
Having refined a toolbox of useful communication techniques for the creative industry, I will share a few valuable ones we use often with our clients and each other to provide the best audio we can.
We have organized all Creative Conversations into one of 4 categories:
- Tool Based: Discussing creative design that takes place at the tool level: i.e. – EQ, effects, volume, pan, etc.
- Sample/Instrument Based: Discussing creative design that would take place at the sample (or instrument) level: i.e. – sound effect recordings, instrument choices, tuning, etc.
- Structural Based: Discussing creative design that would take place at the organizational level: i.e. – structure of the song, layers of sound effects, layering of the instruments, phrasing, etc.
- Concept Based: Discussing creative design that takes place at the concept level: overall vision, textures of sound effects, textures of music, abstract discussions of creative vision, etc. (This will be covered in a future article, as it is a big topic and one worth exploring in rich detail.)
We often find Producers at differing levels of adeptness in discussing the different types of conversations that all make up the final creative product: Tool, Sample, Structure, Concept (TSSC).
However, most Producers seem to believe that they are supposed to provide the concept and the audio professional provides the rest. Now, that is fine when you are so in sync with your audio
professional that you can just say “Make it breathe!” and s/he comes up with exactly what you were thinking of. This is rare, mainly because we all have our own individual maps of
the world, including what it means to make something “breathe”.
Let me give you some examples. Here are some of the funnier comments we have heard over the years from Producers trying to explain their vision for a sound:
“Make it sound like it is going home.”
“There is a fine line between fire and cheese.”
“Could it sound… …Well… …different?”
“It’s gotta have more, um, ya know, and less, well, ya know what I mean?”
“We want this sound to be like ‘Ahhhhhh!’ But no one has to scream; in fact, it is not a voice, it should be more like a stone.”
And our favorite: “I know what I don’t like.” (Sometimes this is useful, but it needs to be supported by knowing what you do like.)
We will protect the innocent by not naming any names!
So let’s take a look at some of the ways we can break down the Audio Design Communication Tool Box known as TSSC.
The tools all audio designers use are, to some degree, similar and in many ways variations of each other. We will focus on the most widely used, and the ones you will most likely need to know about
in order to make valuable suggestions.
EQ
Just like you adjust the knobs or sliders in your car, the audio designer can adjust the EQ on their system. EQ is short for Equalizer (although no one seems to use the full term anymore), and it is
an adjustment of the frequencies across the full audio/hearing spectrum. EQ can be applied both to the final mix (what audio engineers call the 2-track, mix down, final print, the bounce, etc.) and to
the individual sounds/instruments. Although EQ can get detailed and complex, we will divide the EQ spectrum into 3 parts: Low End, Mid Range and High End. Just knowing those terms can make
a world of difference in communicating about EQ changes:
- Low End – as the name suggests, this is the part of the audio spectrum (on either an individual instrument or the final mix) that deals with all the low thuds: i.e. – kick drum, bass, low orchestra strings (basses), timpani, etc. It is important to note that each instrument might have representation in different parts of the audio spectrum: i.e. – a kick drum’s thud is in the low end, while the snap of the kick on the head is in the mid range. To communicate effectively with your audio designer, it is not important for you to know the specifics of where everything sits in the EQ spectrum (that’s what we spend years and years learning!), just the general areas (which are usually intuitive).
- Mid Range – again, as the name suggests, this is the middle of the EQ range, usually containing pianos, guitars, violas, clarinets, etc. (remember that bass and other instruments will have some representation here, but will be more heavily represented in other areas of the EQ spectrum). This is the area that can get the most “muddy”. If a mix sounds too cluttered or heavy, too many instruments might be representing themselves in the mid range.
- High End – also as the name suggests, this is where all the high frequencies sit: i.e. – flutes, piccolos, high octaves on any instrument (the high keys on a piano, the high strings on a guitar), etc. If an instrument or mix is not bright enough (or is too bright), this might be the area of the EQ spectrum to address.
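As a rough illustration of the three bands above, here is a minimal Python sketch (assuming NumPy is available; the 250 Hz and 4 kHz crossover points are our own illustrative choices, not fixed standards) that measures how a signal's energy splits across Low, Mid and High:

```python
import numpy as np

def band_energies(signal, sample_rate, low_cut=250.0, high_cut=4000.0):
    """Fraction of a mono signal's spectral energy in the low, mid and
    high bands. The crossover frequencies are illustrative defaults --
    engineers draw these lines in different places."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2               # power per frequency bin
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    low = spectrum[freqs < low_cut].sum()
    mid = spectrum[(freqs >= low_cut) & (freqs < high_cut)].sum()
    high = spectrum[freqs >= high_cut].sum()
    total = low + mid + high
    return low / total, mid / total, high / total

# A 60 Hz sine -- a kick-drum-like "thud" -- should sit almost
# entirely in the low band.
sr = 44100
t = np.arange(sr) / sr
low, mid, high = band_energies(np.sin(2 * np.pi * 60.0 * t), sr)
```

Numbers like these are only a stand-in for what your ears tell you, but they show why "more low end" and "less mid range" are separate, concrete requests.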
Common areas that often come up in feedback:
- Brightness – perhaps an instrument needs to be brightened up to hear it better, or the overall mix needs to be brighter so that it seems “clearer” (another word often associated with the high end of the EQ spectrum).
- Muddy – perhaps there is too much going on in the mid range of an instrument or the final mix, and it sounds “bulgy” or “fat”. If something feels like it is taking up too much space in the music sonically, it is often in the mid range.
- Warmth/Hollow – if there is not enough mid range, it might feel as if the middle has dropped out. That is because the mid range gives each instrument its warmth, and often the “body” or textures that are pleasing to the ear. Too much will sound bulgy/muddy, and too little will sound hollow or bodiless.
- Boomy or Heavy – this often describes a sound with too much low end in the instrument or mix. Perhaps the low end of the drums is just too loud, thumping away, or the bass is too fat and boomy. This is likely to be addressed in the low end.
- No Bottom – when there is no bottom, you feel like the music has no legs to stand on. There is no weight to the audio, and it has no impact in the low end of the spectrum.
These 8 terms around EQ will let you translate almost any conversation or change request to your audio professional.
These effects are a bit easier than EQ (which is really the most complex of all the tools):
Reverb
This creates the echoey feel of being in an environment like a hall, theater, bathroom, studio, outside, etc. This effect is always added to the individual instruments (not to the final mix
– although too much reverb on all the instruments will make the final mix sound very “reverby”). Key terms and their related effect are:
- Wet – this means there is a lot of the effect on the instrument (the overall mix can also sound too wet).
- Dry – there is not enough reverb on the instrument, and it sounds plain or unprocessed.
- Depth – reverb often gives the spatial perception that there is depth to the instrument, or distance between the listener and the instrument. Sometimes you will prefer more or less depth.
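"Wet" and "dry" boil down to a crossfade between the processed and unprocessed versions of a signal. A minimal Python sketch (a linear crossfade with a hypothetical `mix` parameter; real reverb plug-ins may use an equal-power curve instead):

```python
def wet_dry_mix(dry, wet, mix):
    """Blend an unprocessed (dry) signal with its reverberated (wet)
    counterpart. mix = 0.0 is fully dry, 1.0 is fully wet."""
    return [d * (1.0 - mix) + w * mix for d, w in zip(dry, wet)]

dry = [1.0, 0.5, -0.5]   # toy sample values, not real audio
wet = [0.2, 0.8, 0.1]
blended = wet_dry_mix(dry, wet, 0.5)
```

So "it sounds too wet" is, at bottom, a request to turn one number down.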
Delay
This simply takes the sound and repeats it again and again and again and…
- Wet and Dry are used in the same way here
- Faster or slower – the delay can repeat faster or slower, and usually should be in sync with the tempo of the piece.
- Sync/Out of Sync – the delay will usually sync to the overall tempo, giving you each repetition in time with the song. If it is not, it is “out of sync”.
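The "in sync" idea above is just arithmetic on the tempo. A small Python sketch (the function name and note-division convention are our own illustration):

```python
def delay_time_seconds(bpm, note_division=4):
    """Delay time that stays in sync with the song's tempo.
    A quarter note lasts 60 / BPM seconds; note_division=8 gives an
    eighth-note delay, 16 a sixteenth-note delay, and so on."""
    quarter_note = 60.0 / bpm
    return quarter_note * (4.0 / note_division)

# At 120 BPM, a quarter-note delay repeats every 0.5 seconds.
quarter = delay_time_seconds(120)
eighth = delay_time_seconds(120, note_division=8)
```

Most modern delay plug-ins do this math for you when you enable tempo sync, which is why an out-of-sync delay usually means the sync switch is simply off.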
Volume and Pan
Volume and Pan are the simplest of the tools used to change the sound, but they are often overlooked.
It is often easy to identify an instrument that is too loud (or too soft), but volume can also be used to reduce an overly-complicated piece of music, bring the melody more to the forefront, or
help sculpt a sound effect better. Volume will give more or less presence to something (depending on whether it is loud or soft). Words and phrases like: "subtle", "bold", "in your face", "off in the
distance", "closer/further away" – are all related to volume changes.
Pan is the location of an instrument or sound effect in the stereo field. Often forgotten or ignored, it is a powerful tool that (especially combined with volume) can make a sound feel like
it is moving. This is a great way to create “perspective”, “movement”, “position” or “change” (all words that relate to pan). For those of you who
work in Surround, this is a must to explore.
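Under the hood, a pan knob is just a pair of left/right gains. A minimal Python sketch (the cos/sin equal-power law shown here is one common convention, not the only one mixers use):

```python
import math

def constant_power_pan(pan):
    """Left/right gains for a pan position between -1.0 (hard left)
    and 1.0 (hard right), with 0.0 as center. The equal-power law
    keeps perceived loudness steady as a sound sweeps across the
    stereo field."""
    angle = (pan + 1.0) * math.pi / 4.0    # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = constant_power_pan(0.0)      # center: both channels equal
hard_left = constant_power_pan(-1.0)       # all signal in the left channel
```

Animating that one `pan` value over time is exactly how a sound is made to "move" across the field.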
This is where music and sound effects diverge a bit, so we will discuss them separately.
Instrument-level discussions are incredibly important. This is the raw input that will be used to build your final product. Without an understanding of how to communicate your desires at this
level, your composer might start in a fundamentally different direction than your vision.
- Instrument selection – knowing your instruments and what sounds they make is vital. What instruments do you request when you want western music? (Dobro, Fiddle and Harmonica.) How about Hawaiian? (Lap Steel, Ukulele and String Bass.) Relying on your composer to know these instruments is fine, but it is to your advantage to understand the layers and how these instruments work in each style of music. How about orchestral? (There can be hundreds of instruments.) Or a cinematic/film score – this could be any combination of orchestral instruments with modern instruments… Approaching your composer with a sense of what you like and dislike in the style of music you are trying to get is very helpful. (Note: a good understanding of musical genres is also important here. Asking for “electronic music” is too vague; there are hundreds of styles of electronic music. Knowing whether you want Downtempo, Break Beat, House, Electro, etc. is very helpful. This goes for all other genres too – there are lots of sub-genres in every genre. Spending a couple of evenings going through iTunes and listening to the different radio stations (which are clearly labeled by genre and sub-genre), while trying to clearly identify which instrument is doing what, will do wonders for your ability to communicate about the music.)
- Performance – another part of the instrument category is the performance of the musician. Sometimes the musician is your composer; sometimes the composer is producing other musicians. The performance of each individual instrument is key to a good-sounding piece of music.
- Instrument/Sample Quality – the instruments/samples used for each recording should be the best available, or at least to your liking. Although you do not have to know the difference in quality between a Martin guitar and a Gibson guitar (or a Les Paul and a Fender), if the quality of an instrument does not work for you, that should be individually addressed.
A sound designer uses a very different type of sample-building process. Each sound will fit into one of three categories:
- Organic Sample – this is a recording of an organic, real life sound: i.e. – Footsteps on gravel, door closing, explosions, etc.
- Synthesized Sound – sound generated from a synthesizer (virtual or hardware). In case you are not fully hip to today’s sound design tools: many are virtual, and they are highly sophisticated sound-generating engines with all sorts of complex synthesis going on. Few sound designers still use physical hardware. These virtual synthesizers are called Virtual Instruments, or sometimes Plug-Ins.
- Hybrid – since virtual synthesizers need a source of sound, sometimes they use a fully synthesized sound generator, or they can take an organic recording and process it through the synthesizer engine to create an entirely morphed creation: i.e. – taking a recording of a cow mooing and processing it through a virtual synthesizer (by slowing it down, pitch shifting it way down and adding some synthesis magic) can give you the ferocious roar of a demon creature.
Sound effects are often many layers of each type of sound, potentially heavily affected (with plug-ins or effects). Many times sound designers use sounds that have nothing to do with the
actual use: i.e. – the roar of a lion might consist of a slowed-down cow moo, a drawbridge closing and a large train rumble. Therefore, if you are trying to sculpt a sound effect and are working
at the “Sample/Instrument Level” to make changes, you might want to ask your sound designer what layers (or effects) are being used to make the sound. Perhaps just one sample is not
working, even though you might think the whole sound doesn’t work at all. In fact, many sound effect changes that go from “I hate it” to “I love it” can be quite subtle.
Structural discussions start to lead into the concept, but are still technical, so we separate them for the purposes of thoroughness. These conversations are again different for music and sound effects.
When music is being designed, each composer should be able to (not that they all do) literally chart out the structure of the piece they are writing. Here are some important structural terms you
should know to help sculpt changes to the music:
- Tempo – also called BPM, which stands for Beats Per Minute; this is the speed or pulse at which the song is maintained. If you tap your foot to the beat, you are tapping the Tempo/BPM. It is good to have a rough idea of the BPM you are looking for; words like up-tempo (BPM of 120+), mid tempo (100–119 BPM) and downtempo (80–99 BPM) are very useful for a composer. (Note: these are not absolute definitions, but they are generally accepted.) Tempo can change within a song to pick up or slow down (tempo changes can create a very dynamic feel, and are challenging to pull off well in a piece of music).
- Time Signature – how many beats are in a measure. Usually 4 beats (as in a regular pop song), or sometimes 3 (as in a waltz – though also sometimes used in popular music). There are others, but they are much less commonly encountered.
- Bar/Measure – each bar of music contains 3 or 4 beats, depending on the time signature (or something else entirely if the time signature is different – again, this is rarely encountered). To point to a particular bar/measure you want to address, count from the beginning of the piece (i.e. – “in bar 22, the saxophone is too loud”). Sometimes pointing out music based on the time (minutes and seconds) is more convenient, depending on how the music is being reviewed.
- Melody – this is the hummable part of the song, played one note at a time. Usually it is very obvious and pronounced in the music, but sometimes melodies are subtle.
- Harmony – these are the surrounding notes or chords to the melody (if there is no melody, it is all harmony). This is basically everything else working together to create the music.
- Arrangement – this means two things: 1) the way the instruments are layered over each other, and 2) the way the song plays out linearly from beginning to end. Both uses are incredibly important in discussing the music. To be more precise about which one you are talking about, we say the “Instrument Arrangement” (referring to the layers of instruments making up the song) or the “Song Arrangement” (referring to the sections, and order of sections, in the song).
- Verse – the part of the song that changes each time it comes around (usually the same melody with different lyrics, if applicable)
- Chorus – the part of the song that repeats the same melody each time (and the same lyrics, if applicable)
- Bridge – different section than the Verse or Chorus, usually connects two choruses
- Intro – the introduction to the song, or the build up to the verse
- Outro – the end of the song, after the last chorus
- A Section – when there is no verse/chorus structure, we might refer to the first section as the “A” section. The second section will then be the “B” section and the third the “C” section (sometimes A means Verse, B means Chorus and C means Bridge).
- Climax – the highest or most dramatic part of the song
- Build – this is an area that will build to a climax or might build to a specific section
- Tension/Release – this is a type of structure where one uses tense sections of music and then releases them into pleasing, resolved parts – this brings drama without making the piece overly tense.
- Call and Response – a style where the melody (or rhythm) of one instrument is reflected back by another instrument playing a similar melody (or rhythm) sequentially (one after another).
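Tempo, Time Signature and Bar/Measure from the list above combine whenever you point at a spot in the music. A small Python sketch, assuming a constant tempo (a piece with tempo changes would need more bookkeeping than this):

```python
def bar_to_timestamp(bar, bpm, beats_per_bar=4):
    """Seconds from the start of the piece to the start of a given bar
    (bars counted from 1). Assumes the tempo never changes."""
    seconds_per_beat = 60.0 / bpm
    return (bar - 1) * beats_per_bar * seconds_per_beat

# "In bar 22, the saxophone is too loud" -- at 120 BPM in 4/4,
# that comment points 42 seconds into the piece.
t = bar_to_timestamp(22, 120)
```

This is why either reference works: bar numbers and timestamps are interchangeable once you know the tempo and time signature.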
Structurally, sound effects can be found individually or in sequences. There is not much to say about the structure of sound effect design (the juicy stuff is in the concept phase), but what
is important to note here is the way sound effect designers use layers. Often effects are layers of individual sounds all working together to create a final sound or sequence. Teasing apart
the individual layers (or adjusting them with volume, pan, effects, etc.) can often make a difference when trying to revise an effect.
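That layer-teasing idea can be sketched as a simple weighted sum. The toy numbers and function name below are our own illustration, not a real sound designer's tool:

```python
def mix_layers(layers, gains):
    """Sum equal-length sound-effect layers, each scaled by its own
    gain. Muting a layer (gain 0.0) is a quick way to hear what it
    contributes to the composite effect."""
    mixed = [0.0] * len(layers[0])
    for layer, gain in zip(layers, gains):
        for i, sample in enumerate(layer):
            mixed[i] += sample * gain
    return mixed

# Toy numbers standing in for a "lion roar" built from two layers.
moo = [0.5, 0.5, 0.5]        # slowed-down cow moo
rumble = [0.2, -0.2, 0.2]    # train rumble
roar = mix_layers([moo, rumble], [1.0, 0.5])
```

Asking your designer to pull one layer's gain down, rather than rejecting the whole effect, is often the subtle change that turns "I hate it" into "I love it".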
Go Get Em!
Armed with new tools, a deep perspective on creative design and a whole new world of being able to relate to your audio professionals, I look forward to hearing the audio you will create for your
games! Have fun! If you are interested in hearing the new soundtracks coming from the SomaTone Interactive Audio Team – check us out at