yjbrown

  1.   Usually someone will start by: 1. Figuring out the audio direction: style of music, style of SFX. 2. Figuring out the audio features (i.e. footstep system, ambience system, etc.). 3. With both 1 and 2 in hand, going through the game and creating a rough spotting list of sounds to generate a rough asset list (a small sketch of this follows below). 4. Getting hold of any material that will help you create the initial assets. Start with things that don't have specific animations but are core sounds, like footsteps and foley, then work up from there: ambiences, weapon sounds, pickups, etc.

     Concept art for music, or screen captures. For SFX I want animations and/or gameplay captures, or VFX captures.

     Depends on the game. Some I've started with storyboards, but more often than not I have game footage.

     From the beginning. The planning and documenting of audio, figuring out what audio technical issues are to be dealt with and what audio systems will be needed, and rough designs are created initially. Also figuring out team members, designing the workflow, tool support: all the early stuff that a technical sound designer / audio lead would be involved with. External contractors could be engaged at any point in the project, but mid-project is usually when head count needs to expand to deal with the development cycle being in full swing.
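
As a toy illustration of step 3 above, here is a sketch of turning a rough spotting list into a prioritized asset list; all category and sound names are hypothetical, not from any specific project:

```python
# Minimal sketch: rough spotting list -> prioritized asset list.
# Categories and sound names are made-up examples.

spotting_list = {
    "core":     ["footsteps_dirt", "footsteps_metal", "foley_cloth"],
    "ambience": ["forest_day_loop", "cave_drips_loop"],
    "weapons":  ["pistol_fire", "pistol_reload"],
    "pickups":  ["coin_pickup", "health_pickup"],
}

# Core sounds (no animation dependencies) get produced first,
# matching the order described in the post above.
priority = ["core", "ambience", "weapons", "pickups"]

for rank, category in enumerate(priority, start=1):
    for asset in spotting_list[category]:
        print(f"{rank}. [{category}] {asset}.wav")
```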
  2.   Mod-based files (created with trackers) were used for Atari ST and Amiga games. They were also used in early PC games when space was tight. See for example Star Control II, Jazz Jackrabbit 2 (check out who the composer is), and Deus Ex (this guy is everywhere... *wink*).

     Something that you might find interesting if discussing polyphony:

     1. The concept of virtual polyphony vs real polyphony: in General MIDI you are restricted by real polyphony: 1 note, 1 MIDI instrument sound. In a mod you are still restricted to 1 note, 1 sound; however, that one sound may contain more complex material, such as guitar chords, drum tracks or layered hits (kick + hat), choir riffs, and even multiple instruments layered into a single note. We used these tricks to get more than 4 channels' worth of sound out of old .mod files.

     2. Simulated polyphony: a. Fast arpeggio notes, as in chip music, give the illusion of chords and trick the ear into hearing more polyphony than there actually is (see the sketch below). b. Interspersed notes in the channel: we used this technique to make it sound like a note cut off by another sample was continuing, echoing or reverberating, by placing the samples between other, shorter samples.

     That being said, trackers are just tools. I've used trackers to create MIDI for mobile phone games, and I've also used trackers to create rendered music for games.
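
To hear the fast-arpeggio trick in isolation, here is a minimal sketch, assuming numpy, that renders a one-channel arpeggio (real polyphony of 1) fast enough to blur into a chord; the note choice and timing are arbitrary:

```python
# One mono channel playing a fast arpeggio: real polyphony of 1,
# but the ear hears a chord. Assumes numpy; the A-minor triad and
# 30 ms note length are arbitrary. Clicks between notes are left
# in for brevity.
import numpy as np
import wave

SR = 22050
NOTE_LEN = 0.03                    # 30 ms per note, chip-music fast
FREQS = [220.0, 261.63, 329.63]    # A3, C4, E4

t = np.arange(int(SR * NOTE_LEN)) / SR
samples = []
for _ in range(35):                # ~3 seconds of arpeggio
    for f in FREQS:                # one note at a time
        samples.append(0.5 * np.sin(2 * np.pi * f * t))
audio = np.concatenate(samples)

with wave.open("arpeggio.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)              # 16-bit
    w.setframerate(SR)
    w.writeframes((audio * 32767).astype(np.int16).tobytes())
```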
  3.   This averages out to about 12 seconds per take: with 3 takes one after another per line, an hour's worth of lines yields about 20 minutes of final audio, keeping one take per line (the arithmetic is sketched below). So if the actor is able to nail the performance and delivery quickly, in 2 takes, you get more. If the average line length is shorter, you get more... etc.
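
The arithmetic, spelled out, using the 100-lines-per-hour budget quoted in the related post below:

```python
# Quick check of the numbers above, assuming the 100-lines-per-hour
# budget from the related post in this thread.
lines_per_hour = 100
takes_per_line = 3
studio_seconds = 3600

seconds_per_take = studio_seconds / (lines_per_hour * takes_per_line)
final_audio_minutes = lines_per_hour * seconds_per_take / 60  # one kept take per line

print(seconds_per_take)      # 12.0 -> ~12 seconds per take
print(final_audio_minutes)   # 20.0 -> ~20 minutes of final audio per hour
```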
  4. By the way, I highly, highly recommend this piece of kit: https://auphonic.com   It's a batch processor which takes a lot of the guesswork out of levelling dialogue (a rough local stand-in is sketched below). I run it both before processing and mastering, to get my VO to a consistent level across the session, and then once more after processing and mastering, to ensure it adheres to the final dialogue levels we want in game.
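
Auphonic is a hosted service; if you want a rough local stand-in for the same batch-levelling idea, ffmpeg's loudnorm (EBU R128) filter can do a first pass. A sketch, assuming ffmpeg is installed; the folder names are placeholders and the -16 LUFS target is an illustrative spoken-word value, not Auphonic's setting:

```python
# Rough local stand-in for batch dialogue levelling, using ffmpeg's
# loudnorm (EBU R128) filter. Assumes ffmpeg is on PATH; "vo_raw" and
# "vo_levelled" are placeholder folder names, and -16 LUFS is an
# illustrative spoken-word target.
import pathlib
import subprocess

TARGET = "loudnorm=I=-16:TP=-1.5:LRA=11"

pathlib.Path("vo_levelled").mkdir(exist_ok=True)
for src in sorted(pathlib.Path("vo_raw").glob("*.wav")):
    dst = pathlib.Path("vo_levelled") / src.name
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src), "-af", TARGET, str(dst)],
        check=True,
    )
```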
  5. Typically we budget about 100 lines per hour, given 2, maybe 3, takes per line. If the lines are story-driven, the delivery can take some work to nail down. If the weight of a line has to portray a certain emotion or will drive the animation, more time can be spent on single lines, like the 'wow' example. A branding word or phrase will get high priority and many takes to achieve the vision.

     Typically if you have a lot of dialogue, you want someone from the client side to be present, to help provide direction rather than guessing what they want and then having to re-book the studio and/or actor again. If you have a lot of dialogue, it may also be worth splitting it into multiple sessions so that you can take the feedback and apply it to retakes in following sessions if necessary.

     We have done some character-based games where most of the dialogue is callouts of only 2-7 words; we average over 1000 individual takes, which works out to about 300-ish lines in 2 hours of recording, and the editing takes up to about 8-9 hours after that for an editor to find and name the takes (the numbers are sanity-checked below).
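
A quick sanity check of those callout-session numbers; the 8.5 is just the midpoint of the quoted "8-9 hours" of editing:

```python
# Sanity-checking the callout-session numbers quoted above.
takes = 1000
lines = 300
session_hours = 2
edit_hours = 8.5

print(takes / lines)              # ~3.3 takes per line
print(lines / session_hours)      # 150 lines/hour -- above the usual 100,
                                  #   because callouts are only 2-7 words
print(takes / (edit_hours * 60))  # ~2 takes reviewed per editing minute
```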
  6.   Haha... the gecko sounds either like a creaky door or a land dolphin. I wonder, if you slow it down, whether it might sound like choirs singing...
  7. Do you ever hold back on music

    I'm wondering why you would want to hold it back in the first place. Did you sign away all rights to the music in the contract? I've written some pretty good tunes in my time for clients but never held them back; I didn't have any reason to. Part of the business is learning to let go, and also to negotiate better up front.

    Like CCH says: unless contractually specified, it's up to you whether to put the piece forward for review / acceptance or not. Think of it this way: even if the game is low profile, would you not want it out there, published as an example of your skill and part of your portfolio of what you can do for games? You can think of it as an investment rather than a loss.
  8. Which to learn first: Wwise or FMOD?

    Agreed with Brian. It's the concepts you want to understand; the middleware provides the way to implement those concepts. It's like asking: should I learn Pro Tools or Nuendo first? Learn signal flow, the basic concepts of what a DAW does, audio theory and mixing concepts... then apply those to the tools.

    I'm going to be difficult and say: learn both at the same time! If you can, try to do implementation tutorials for the same material in both toolsets so you get a good understanding of the differences.
  9. You want to go with professionally mixed and mastered tracks. The problem with royalty-free tracks is that you never know whether they were mixed or mastered well to begin with; they're usually just a company collating a whole bunch of composers' works to license out.

     To learn clarity, mixing and mastering, you can use commercial tracks to deconstruct and analyse. While I was at SAE, I took tracks from CDs I owned to reference and analyze. Mr Cab Driver by Lenny Kravitz was one of them; Macy Gray had some I used too. Most big-budget commercial game tracks should credit mixing and mastering engineers, and those are some you could look at too. If you're doing orchestral works, film scores are also a great resource. What kind of genre are you looking at improving?

     Good use of dynamics and not cluttering your songs also helps with the mixing process. It's not all about trying to fix issues in the final stages, but about choosing the right instrument tones, separating instruments into the bands they use, and removing unwanted frequencies that create additive mush against other instruments (a rough way to measure that overlap is sketched below). If all your instruments are going all the time at a constant volume, it's going to be hard to mix. Rather than throwing multiple different reverbs on everything, use reverb sends and group instruments to send to those reverbs. Typically in a song I may have only one vocal reverb, and other instruments may share another reverb. Controlling bass and kicks together is another technique... there are so many, and different songs call for different techniques, a lot of which I learned by experimenting, reading around and listening.
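
One way to spot that "additive mush": measure how much energy two stems share per band. A rough sketch, assuming numpy and two hypothetical mono 16-bit WAV stems at the same sample rate:

```python
# Sketch: per-band energy overlap between two stems, to spot the
# frequency clashes described above. Assumes numpy and mono 16-bit
# WAV stems; "bass_stem.wav" / "keys_stem.wav" are placeholder names.
import numpy as np
import wave

def band_energy(path, bands):
    with wave.open(path, "rb") as w:
        sr = w.getframerate()
        x = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    spectrum = np.abs(np.fft.rfft(x.astype(float)))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

BANDS = [(60, 120), (120, 250), (250, 500), (500, 1000),
         (1000, 2000), (2000, 4000), (4000, 8000)]

bass = band_energy("bass_stem.wav", BANDS)
keys = band_energy("keys_stem.wav", BANDS)
for (lo, hi), b, k in zip(BANDS, bass, keys):
    overlap = min(b, k) / (max(b, k) + 1e-9)   # 1.0 = both fighting equally
    print(f"{lo:>5}-{hi:<5} Hz  overlap ratio: {overlap:.2f}")
```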
  10. One of the things that can help is to improve your analytical listening. Active listening involves training your ear to hone in on elements in a song: taking a whole mix and deconstructing it so you can analyze how it was constructed.

      Reference tracks are songs that are mixed well in the genre you are mixing; you use them as a tool to help re-create the same EQ, reverb, panning and volume for the parts. Then you can A/B your mix against the reference track and see how they compare.

      Draw a square on a piece of paper. Listen to a song you're using as a reference. Within that square, draw which instruments are where in the stereo field and how far back they are in the mix (volume, placement, wet reverb, etc.), as you perceive it. Identify the processing used: chorus, flange, echo, reverb, EQ. You should be able to construct a fairly good graphic representation of a mix. This is the blueprint you can apply to your own song.

      These are actual exercises we had to do in an audio engineering course. We also learned how to hear individual frequencies on a 20-band EQ; our lecturer used to test us twice a week by boosting or ducking one of the bands (a toy version of that drill is sketched below).

      Mixing is like playing an instrument: you don't just do it, you need to learn the basics first, then practice listening to others play to try to emulate them, and practice playing to improve. A lot of this skill is practice, and learning how to hear things in the music and then understanding how your processing and EQing will affect the mix. There's no specific 'reference track' library per se, just songs that you aspire to mix as well as.
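
That band-guessing drill can be approximated at home. A toy sketch, assuming numpy, that writes flat noise and the same noise with one random octave band boosted, so you can guess the band by ear; the band centres and boost amount are arbitrary choices:

```python
# Toy version of the EQ ear-training drill described above: write
# flat noise and the same noise with one random band boosted, then
# try to name the band by ear. Assumes numpy; octave-band centres
# and the ~12 dB boost are arbitrary.
import numpy as np
import wave
import random

SR = 44100
DUR = 4
noise = np.random.default_rng().standard_normal(SR * DUR)

spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), 1.0 / SR)

centre = random.choice([31, 63, 125, 250, 500, 1000,
                        2000, 4000, 8000, 16000])
lo, hi = centre / 2 ** 0.5, centre * 2 ** 0.5   # one octave wide
boosted = spectrum.copy()
boosted[(freqs >= lo) & (freqs < hi)] *= 4.0    # ~12 dB amplitude boost

def write(path, spec):
    x = np.fft.irfft(spec, n=len(noise))
    x = 0.3 * x / np.abs(x).max()               # normalize with headroom
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes((x * 32767).astype(np.int16).tobytes())

write("flat.wav", spectrum)
write("boosted.wav", boosted)
print("Guess the band, then check:", centre, "Hz")
```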
  11. Music Demos Opinion

    You have a talent for the weird... I'd say push this style more; it's very suitable for many games.   BRATISHKA RUSTY CAROUSEL
  12. 2015 Game Music and Sound Industry Survey

    Done, Brian! Worth filling out if you haven't already and have done audio work for games in any way or capacity.
  13. So what do you guys think is holding me back?

    Whilst good libraries can help your overall quality, I feel the target for your efforts lies in the orchestration and song production. The first song on SoundCloud feels very loose in the timing between individual parts. There are held chords which drown out the other parts in both the 1st and 2nd songs and cause clashes. A lack of depth in the dynamics, in the individual parts and as a whole, creates that flat sound which takes away from the emotional impact. Your 3rd piece is stronger, and more cohesive than the two earlier tracks, but it still clashes sonically and has no movement in the dynamics.

    You don't need super-high-quality samples to get work, but they can help with presentation. Don't worry, I started out with similar issues, but one of the things I learned was how to polish the hell out of whatever I wrote with whatever samples and libraries I had at my disposal. I think that is where you need to concentrate your efforts: how to use the frequencies of various instruments together, and how to make the different parts complement one another. Learn to analyze your own music and other people's; you can learn a lot from their techniques. Improving your instrument and mix frequencies, your use of supporting instruments, and your overall production will get you to a point where you can listen to your music and say to yourself that it has a quality high enough for someone to pay for it.

    Example: https://m.soundcloud.com/groovyone/gba-orchestral Those are not realistic samples at all, and while they're specifically targeted at the Game Boy and DS market, listen to how the instruments work together, complement or offset one another. https://m.youtube.com/watch?v=DDIQNcnWZyQ This is an example of the same type of low-quality, unrealistic samples working together and produced differently (EQ, reverb, compression), but together everything feels cohesive. If you can produce good tracks with the tools you have, then it becomes easier to produce good tracks with more advanced tools.
  14. Mastering Separate 'Adaptive Audio' Tracks

    Also have a look at Fabric for Unity 5. It comes with a lot of good features to help implement your audio well, and it's native to Unity. You have access to mix states, mix buses and DSP, as well as any other DSP that Unity provides. Unity's native audio showcases some of the DSP functions in Unity 5; the side-chain compression is something you may need to use (also a feature in Wwise; a minimal sketch of the idea follows below). http://blogs.unity3d.com/2014/07/24/mixing-sweet-beats-in-unity-5-0/

    The Fabric audio engine provides a lot more than basic Unity audio (tempo-based music switching, etc.), though I don't yet know how complex its music system is. http://www.tazman-audio.co.uk/ Both Fabric and Wwise can be obtained free to experiment with before choosing one to license. Dynamic mixing for games is definitely an art in itself, and a lot of fun to make work.

    It really depends on how your music is technically adaptive. Does the mix just change because one stem gets added (i.e. there's a basic mix and then another stem is layered in), or does the music need to get to a transition point and change? That will drive which middleware or solution you choose as well. I have not explored Fabric's dynamic music system at all, but I have used Wwise's and FMOD Designer's a lot. From talks and demos I've had with the FMOD guys, and from the previous Designer system, both FMOD and Wwise have outstanding dynamic music systems and authoring tools where you can prototype the behaviors. Both also work with Unity via an external plugin. It's so great to have so many choices these days!

    As far as standards go, a lot of developers for console / TV are adopting the ITU-R BS.1770 loudness standard; Wwise actually supports metering against it to help achieve the right mix. For handheld / tablet etc. there is no real standard yet, since we're dealing with different speaker types and sizes.
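
For the side-chain compression idea mentioned above, here is a minimal sketch of the underlying technique: an envelope follower on the dialogue signal drives gain reduction on a music bed. It assumes numpy and uses placeholder signals; it is not Unity's or Wwise's actual implementation:

```python
# Minimal side-chain ducking sketch: dialogue level ducks the music bed.
# Assumes numpy; the sine "music" and "dialogue" are stand-in signals.
import numpy as np

SR = 48000
t = np.arange(SR * 4) / SR
music = 0.4 * np.sin(2 * np.pi * 220 * t)              # stand-in music bed
dialogue = np.zeros_like(t)
dialogue[SR:2 * SR] = 0.5 * np.sin(2 * np.pi * 500 * t[SR:2 * SR])

# One-pole envelope follower on the side-chain (dialogue) signal.
attack, release = 0.01, 0.25                           # seconds
a_att = np.exp(-1.0 / (SR * attack))
a_rel = np.exp(-1.0 / (SR * release))
env = np.zeros_like(dialogue)
for i in range(1, len(dialogue)):
    x = abs(dialogue[i])
    coeff = a_att if x > env[i - 1] else a_rel
    env[i] = coeff * env[i - 1] + (1 - coeff) * x

duck_depth = 0.75                                      # up to ~12 dB of ducking
gain = 1.0 - duck_depth * np.clip(env / 0.5, 0.0, 1.0)
mixed = music * gain + dialogue                        # ducked music under VO
```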
  15. Mastering Separate 'Adaptive Audio' Tracks

    Hey Matt, this sounds fascinating. You can certainly try to prototype it in Wwise. Experiment with 3D attenuation graphs. Try using a music bus and running some DSP over it (Wwise supports things like compressors, side-chain compressors / ducking, EQ, etc.); there are also some third-party vendors of fairly good DSP out there.

    It really depends on what you're trying to do with the objects: how many other tracks play, whether they fade out to silence or just duck a bit to focus on the other object. Can you, within a certain radius, fade over to the fully mixed, produced track (one way to shape that crossfade is sketched below)? Lots of different things to try. Some will need engineering support and a way for you to tune the results; most you can try to prototype in the audio authoring middleware.
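
One way to prototype that radius-based fade-over before wiring it into Wwise attenuation curves: compute equal-power crossfade weights from listener distance. The radii and curve here are illustrative assumptions, not middleware defaults:

```python
# Sketch of the radius-based crossfade idea: inside the inner radius
# you hear the fully produced mix; past the outer radius, only the
# object stems; in between, an equal-power crossfade. The radii are
# illustrative placeholders.
import math

INNER_RADIUS = 5.0    # metres: full produced mix
OUTER_RADIUS = 20.0   # metres: stems only

def crossfade_weights(distance):
    x = (distance - INNER_RADIUS) / (OUTER_RADIUS - INNER_RADIUS)
    x = min(max(x, 0.0), 1.0)
    mix_gain = math.cos(x * math.pi / 2)    # equal-power fade out
    stem_gain = math.sin(x * math.pi / 2)   # equal-power fade in
    return mix_gain, stem_gain

for d in (0, 5, 10, 15, 20, 30):
    m, s = crossfade_weights(d)
    print(f"{d:>3} m  mix={m:.2f}  stems={s:.2f}")
```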