kernel n bytes

I cannot find any good tutorials on designing sound for games


I am currently developing a music game and I need to design sounds for the menus and for in-game objects. I've been using Audacity. It's good, but I can't find any tutorials on the program or, more importantly, on sound design for games.

I'd imagine that menus and even in-game sound feedback wouldn't be too different from standard UI usability guidelines about how to make things clear/consistent/effective. (Someone feel free to disagree.)

Quote:
Original post by kernel n bytes
I am currently developing a music game and I need to design sounds for the menus and for in-game objects. I've been using Audacity. It's good, but I can't find any tutorials on the program or, more importantly, on sound design for games.


Depends on what you want (or need) to know. Sound design can involve anything from blundering about the planet with sound recording gear, to building your own sounds waveform by waveform using synthesis techniques.

Synthesis was the only method available during the early years of gamedev, since computers and consoles of the day couldn't play samples. (Well, they could, but not very well. Even the Commodore Amiga, which introduced sampling to the home market, was limited to 8-bit audio.)

Sampling is, oddly enough, one of the earliest forms of synthesis. It began in the 1940s with the invention of magnetic tape, on which musicians would record short snippets of sound. They would then cut the tape up and rearrange it to produce the final piece. (To change the pitch of a sound, they'd play the tape recordings at different speeds into another tape recorder.) This method was invented in France and known as musique concrète. It is perhaps most famous in the UK for its use in producing the iconic original theme music for "Doctor Who" back in 1963. (Google "Delia Derbyshire" or "BBC Radiophonic Workshop" if you want to learn more about the musique concrète process and early sound design. It makes for fascinating reading.)

Musique Concrète is the reason all music sequencers today have user interfaces derived from magnetic tape recording.

Synthesis is something all musical instruments do: they produce waveforms by producing vibrations. Stringed instruments use vibrating strings (obviously), while woodwind instruments use air blown over or through a mouthpiece to produce their sounds. Others might use reeds, hammers on wooden blocks, or metal strips.

Electronic synthesisers use increasingly complex waveforms instead, generated by oscillators. Oscillation is vibration, so a sine wave oscillator will produce sinusoidal waveforms; a 'sawtooth' oscillator will produce a waveform that looks like the triangular blades of a saw... and so on. As electronics technology has evolved, so has the complexity of the oscillators, and also the number of such devices fitted into a synthesiser.
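
If you'd like to see what an oscillator boils down to in code, here's a rough sketch in Python with NumPy (my own choice of tools; any language with a sine function would do, and the sample rate and frequency are just convenient numbers). It fills one second each of a sine and a sawtooth wave as raw sample arrays:

```python
import numpy as np

SAMPLE_RATE = 44100          # samples per second
DURATION = 1.0               # seconds
FREQ = 440.0                 # A4, in Hz

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Sine oscillator: the purest tone, a single frequency.
sine = np.sin(2 * np.pi * FREQ * t)

# Sawtooth oscillator: ramps from -1 to +1 once per cycle, then snaps back,
# which is what gives it that bright, buzzy sound rich in harmonics.
saw = 2.0 * (FREQ * t - np.floor(0.5 + FREQ * t))
```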

Amplitude modulation ("AM") is the oldest form of electrical synthesis. You can hear it in action in the Theremin, which produces a simple sine wave, modulated by the hands of the operator. The operator can change the volume (i.e. amplitude) of the sound by moving his hand along one sensor; the other sensor controls the pitch of the waveform.
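
In code, amplitude modulation is nothing more than a multiplication: one signal scales the loudness of another. Here's a rough sketch of the Theremin idea in the same Python/NumPy style, with a slow control curve standing in for the player's hand (all the numbers are made up purely for illustration; this is not a Theremin emulation):

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE * 2) / SAMPLE_RATE    # two seconds of audio

carrier = np.sin(2 * np.pi * 440.0 * t)         # the audible pitch

# A slow swell from silence up to full volume and back down again, like a
# hand moving towards and then away from the volume antenna.
envelope = 0.5 * (1.0 - np.cos(2 * np.pi * 0.5 * t))

am_signal = envelope * carrier                  # amplitude modulation
```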

Frequency modulation ("FM") uses rapid changes in frequency to create sounds. The trick with FM is to take advantage of wave harmonics when using this technique. Get the right frequencies and you can create some very 'thick' sounds. This technology first appeared in the late 1960s.

(Both AM and FM synthesis came about through work on radio technology, which is why you've most likely heard both terms before.)
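
A bare-bones FM voice is only a couple of lines: one oscillator (the modulator) rapidly wobbles the phase of another (the carrier). The ratio and modulation index below are arbitrary values I picked to get a thick, bell-ish tone, not settings from any particular synth:

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE * 2) / SAMPLE_RATE

carrier_freq = 220.0     # fundamental pitch, in Hz
mod_ratio = 2.0          # modulator runs at twice the carrier frequency
mod_index = 5.0          # depth of the frequency wobble

# The modulator's output is fed straight into the carrier's phase; keeping
# mod_ratio at simple whole-number ratios keeps the result harmonic.
modulator = np.sin(2 * np.pi * carrier_freq * mod_ratio * t)
fm_signal = np.sin(2 * np.pi * carrier_freq * t + mod_index * modulator)
```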

FM synthesis really caught on in the 1980s, when the Yamaha DX7 synth (released in 1983) became hugely popular. Most computers and consoles of the late '70s and '80s made do with much simpler tone generation; the Commodore 64's SID chip, a genuine subtractive synthesiser on a chip, was one of the very few stand-outs.

Computer-based sampling first appeared in the 1970s. RAM was expensive back then, so it was very basic for a while, but it hasn't really changed much over the years. Sampling remains a kind of virtual magnetic tape and the same principles usually apply.
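
The 'virtual tape' behaviour is easy to sketch too: read a recording back faster or slower and its pitch shifts, exactly like the musique concrète tape trick mentioned earlier. The linear-interpolation resampling below is deliberately crude (and the 'recording' is just a generated tone); real samplers interpolate and filter far more carefully:

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
sample = np.sin(2 * np.pi * 440.0 * t)          # stand-in for a real recording

speed = 2.0                                     # 2x tape speed = one octave up
positions = np.arange(int(len(sample) / speed)) * speed   # read points on the 'tape'
pitched_up = np.interp(positions, np.arange(len(sample)), sample)
```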

Today's most common synthesis technologies are:

1. FM synthesis.

2. "Granular" synthesis -- basically gluing lots and lots of very, very short samples ("grains") together (it's often mentioned alongside "wavetable" synthesis, which steps through stored single-cycle waveforms instead);

3. "Resynthesis", which relies on the fact that all sounds can be broken down into sine waves. Resynthesis uses this property to break down a sound into simpler sounds which you can then play with. It's still in its early days, so it's not perfected yet, but it's rising in popularity.
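
As a toy illustration of that 'sums of sines' idea (nothing like a production resynthesis engine, which works frame by frame and tracks partials over time), here's a single-FFT sketch that breaks a noisy sound into its sine components, keeps only the strongest ones, and rebuilds it:

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

# A deliberately messy source: two tones buried in a little noise.
source = (np.sin(2 * np.pi * 220.0 * t)
          + 0.5 * np.sin(2 * np.pi * 660.0 * t)
          + 0.1 * np.random.randn(len(t)))

spectrum = np.fft.rfft(source)                  # decompose into sine components
threshold = 0.1 * np.max(np.abs(spectrum))
spectrum[np.abs(spectrum) < threshold] = 0.0    # throw away the weak ones

rebuilt = np.fft.irfft(spectrum, n=len(source)) # resynthesise what's left
```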

How these techniques are implemented varies from device to device. Some add waveforms together (additive synthesis); others start with a harmonically rich waveform and filter parts of it away (subtractive synthesis). And there's a hell of a lot more detail to each technique than the vague hand-waving I've given above: ring modulation, feedback, distortion, reverb algorithms, white / pink noise generators, and so on.

*

Sound design is the process of using the above knowledge to create sounds that suit the subject matter. A simple upwards 'swoop' sound, for example, can be produced by playing a sine wave and raising its pitch over time.
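
That swoop is about the simplest sound-design exercise there is, so here's a throwaway sketch of it in Python (the frequencies, duration and the 'swoop.wav' filename are arbitrary choices of mine). It sweeps a sine wave from 200 Hz up to 1200 Hz over half a second and writes a 16-bit WAV you could hang off a menu event:

```python
import wave
import numpy as np

SAMPLE_RATE = 44100
DURATION = 0.5
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

freq = np.linspace(200.0, 1200.0, len(t))              # rising pitch
phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE      # accumulate phase so the sweep stays smooth
swoop = np.sin(phase) * np.linspace(1.0, 0.0, len(t))  # fade out towards the end

with wave.open("swoop.wav", "wb") as f:                # hypothetical output file
    f.setnchannels(1)                                  # mono
    f.setsampwidth(2)                                  # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes((swoop * 32767).astype(np.int16).tobytes())
```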

Audacity contains some basic synthesis features, but it's not really ideal for sound design. Most designers will use a master controller -- usually a keyboard -- connected to a computer running some dedicated sound synthesis software. (Many keyboards include their own synthesisers too, but a computer offers much more flexibility and choice in synthesis techniques.)

A much more in-depth article can be found here.

