Search the Community

Showing results for tags 'Theory'.

Found 110 results

  1. I'm trying to calculate the normals of my grid surface. The map is 29952px x 19968px and each cell is 128px x 128px, so I have 36895 vertices. My flat map array is sent to the shaders with the following structure:

```cpp
float vertices[368950] = {
    // x      y      z       znoise  xTex  yTex  xNorm yNorm zNorm Type
    16384, 16256, -16256,    0,      0.54, 0.45, 0,    0,    1,    1,
    16256, 16384, -16384,    0,      0.54, 0.45, 0,    0,    1,    1,
    // ......
};
```

I calculate the zNoise with a function `float noise(float x, float y);`, and it works (I add it to y and z in the vertex shader).

Method 1

If I calculate normals using the finite-difference method, I obtain a nice result. Pseudo-code:

```cpp
vec3 off = vec3(1.0, 1.0, 0.0);
float hL = noise(P.xy - off.xz);
float hR = noise(P.xy + off.xz);
float hD = noise(P.xy - off.zy);
float hU = noise(P.xy + off.zy);
N.x = hL - hR;
N.y = hD - hU;
N.z = 2.0;
N = normalize(N);
```

But in the case where I need to edit the map manually, for example in an editor context where you set the zNoise with a tool to create mountains as you want, this method won't help. I get this nice result (seen from the minimap; the normals are quite dark on purpose).

Method 2

```
  |    |    |
--6----1----+-
  |\   |\   |          Y
  | \  | \  |          ^
  |  \ |  \ |          |
  |   \|   \|          |
--5----+----2--        +-----> X
  |\   |\   |
  | \  | \  |
  |  \ |  \ |
  |   \|   \|
--+----4----3--
  |    |    |
```

So I'm trying to calculate the normal using the adjacent triangles, but the result is very different (it seems that there's a bug somewhere). Code:

```cpp
std::array<glm::vec3, 6> getAdjacentVertices(glm::vec2 pos) {
    // getVertex is a function that returns a glm::vec3
    // with values x, y + znoise, z + znoise
    std::array<glm::vec3, 6> output = {
        getVertex(pos.x, pos.y + 128),        // up
        getVertex(pos.x + 128, pos.y),        // right
        getVertex(pos.x + 128, pos.y - 128),  // down-right
        getVertex(pos.x, pos.y - 128),        // down
        getVertex(pos.x - 128, pos.y),        // left
        getVertex(pos.x - 128, pos.y + 128),  // up-left
    };
    return output;
}
```

And the last function:

```cpp
glm::vec3 mapgen::updatedNormals(glm::vec2 pos) {
    bool notBorderLineX = pos.x > 128 && pos.x < 29952 - 128;
    bool notBorderLineY = pos.y > 128 && pos.y < 19968 - 128;

    if (notBorderLineX && notBorderLineY) {
        glm::vec3 a = getVertex(pos.x, pos.y);
        std::array<glm::vec3, 6> adjVertices = getAdjacentVertices(pos);

        glm::vec3 sum(0.f);
        for (int i = 0; i < 6; i++) {
            int j = (i == 0) ? 5 : i - 1;
            glm::vec3 side1 = adjVertices[i] - a;
            glm::vec3 side2 = adjVertices[j] - a;
            sum += glm::cross(side1, side2);
        }
        return glm::normalize(sum);
    } else {
        return glm::vec3(0.3333f);
    }
}
```

I get this bad result (seen from the minimap), unfortunately. Note: the buildings are in different positions, but the surface has the same seed using the two methods. Could anyone help? 🙂 Thanks in advance.
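One way to keep Method 1's nice result in the editor case is to keep the central-difference math but read the heights from the stored, editable per-vertex data instead of calling noise(). A self-contained sketch under those assumptions (plain structs instead of glm; `Vec3`, `gridNormal` and `heightAt` are illustrative names, and the height lookup is assumed to clamp at the map borders):

```cpp
#include <cmath>

// Plain stand-in for glm::vec3 so the sketch is self-contained.
struct Vec3 { float x, y, z; };

static Vec3 normalizeV(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Central-difference normal: the same math as Method 1, but driven by a
// height lookup that can return hand-edited editor values instead of
// procedural noise(). `cell` is the grid spacing (128 in the post).
Vec3 gridNormal(float (*heightAt)(float, float), float x, float y, float cell) {
    float hL = heightAt(x - cell, y);
    float hR = heightAt(x + cell, y);
    float hD = heightAt(x, y - cell);
    float hU = heightAt(x, y + cell);
    // Scaling the Z component by 2 * cell keeps the slope in world
    // units, matching the N.z = 2.0 term of the one-unit-offset version.
    return normalizeV({ hL - hR, hD - hU, 2.0f * cell });
}
```

One detail worth double-checking in Method 2: getVertex adds znoise to both y and z, while the finite-difference shader treats the noise as a single height, so mixing it into two components would by itself make the cross-product sum disagree with Method 1.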
  2. Hi there, I'm currently developing a turn-based single-player top-down strategy game: the player against simple swarm-like opponents which use spawners to spawn units.

Summary of game mechanics

There are factories which can produce three different basic "bot" units: attack bots (standing for aggression), defense bots (survivability) and utility bots (flexibility, mobility, range, etc.). The bots don't do a lot by themselves, but any three of them can be combined to create another unit with a role based on the ingredients used. E.g. combining three attack bots must result in a highly aggressive unit.

My goal

Tweaking the types of units available to the player. I'll need 10 units which each have a specific role reflecting the ingredients used to create them (see above). Additionally, the role of each unit needs enough room in its concept to allow upgrades into a specialised variant of the base role.

What I have so far (attack bot = "A", defense bot = "D", utility bot = "U"):

  • A+A+A: "berserk style" close-range damage dealer
    • Variant 1: bouncing around causing chaos
    • Variant 2: self-healing on kills
  • A+A+D: mid-range damage support unit with pushback
    • Variant 1: sniper
    • Variant 2: faster shooting speed and increased damage to undamaged targets
  • A+D+D: close-range defense support with minor damage, e.g. area-of-effect slowdown
    • Variant 1: susceptibility debuff and focus on one target
    • Variant 2: ? More slowdown and more area of effect?
  • D+D+D: heavily armored tank unit which can push targets away when moving
    • Variant 1: spikes damaging attackers
    • Variant 2: armor
  • A+A+U: long-range area-of-effect damage
    • Variant 1: damage over time and large area of effect
    • Variant 2: ?
  • A+U+U: army movement and attack efficiency manager, e.g. damage buffs, pushing units further to the frontlines, terraforming, etc.
    • Variant 1: attack focus, i.e. stronger buffs?
    • Variant 2: movement focus, i.e. better terraforming, teleporting units
  • U+U+U: general supply unit which restores energy to units so they can move/attack again
    • Variant 1: balancer, i.e. making sure to spread energy well among allies
    • Variant 2: ?
  • D+D+U: healer
    • Variant 1: overheals
    • Variant 2: long-range health transfer among allies
  • D+U+U: ambush manager: a "hook" skill which drags enemies, and a spawnable barrier which soaks up damage and blocks the path
    • Variant 1: invisibility buff and long-range hook skill
    • Variant 2: ? Mind-control debuff? Better barrier?
  • A+D+U: ?

Roles I cannot add:

  • Jack of all trades: even if balanced, this makes all other units somewhat optional because it can essentially fulfill every role
  • Scout: there is no fog of war, so no scouting role is required

Am I missing any basic unit role I could add? How could I fill the open items in the list above? Are there any duplicates in the roles which I could merge? Looking forward to your ideas and feedback!
  3. I'd like to know the whole process of creating a game from beginning to end. What do I have to do first, what should I note for the future, and what issues need to be identified? What needs to be done after the game idea is formed? What kinds of specialists will be needed? How can I correctly communicate my idea to the specialists and get what I need from them? How can I become the connecting link between them? How much time will it take to create the game? What additional costs await the company besides remuneration and advertising? My game is going to be as simple as possible, probably the same type as Voodoo's games or something similar, like "Helix Jump", "Rise Up", "Balls VS Blocks", etc.
  4. Hi guys, I've an OpenGL question which is quite math and linear algebra related. Let's assume we have two coordinate systems, S (scene) and O (object). I'd like to place O inside S, so I need O' (in S coordinates). Using the following transformation matrices I can do that: rotation, scale, displacement. So far so good. I have two questions though:

1) Assuming the place of O' is specified with 4 points (the origin, plus the end point of each axis unit vector), how can I calculate the required transformation matrices? It's a "simple" case: let's say the points are P0, P1, P2, P3 and x = P0->P1, y = P0->P2, z = P0->P3. Also |x| = |y| = |z| (all have the same length) and they enclose 90 degrees with each other. This surely can be solved easily using standard GL transformations; I just need an algorithm to calculate the matrices from P0, P1, P2, P3.

2) The more difficult question: how can I do the same if O' can be distorted, so |x| != |y| != |z| and their angles are not necessarily 90 degrees? (Imagine that you "sit" on O, so O' becomes stretched and its top becomes larger and moves to the side, so that angle(x, y) = 80 degrees, for example.) How do I get the required transformation matrices in this universal case, when I only know P0, P1, P2, P3?

Hope it's clear what I'm asking. I need an algorithm to generate a transformation matrix that I can then use to transform all points in O into O'. bzt
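For what it's worth, both cases have the same answer if the matrix is built directly from the points rather than composed from rotate/scale/translate: the columns of the object-to-scene matrix are simply the images of the basis vectors (P1-P0, P2-P0, P3-P0) plus the translation P0, and nothing in that construction requires equal lengths or right angles. A self-contained sketch (column-major, OpenGL-style; plain structs instead of glm, and `frameFromPoints`/`apply` are illustrative names):

```cpp
#include <array>

// Plain stand-ins so the sketch is self-contained (no glm).
struct Vec3 { float x, y, z; };

// Column-major 4x4, the OpenGL convention: element (row, col) lives at
// m[col * 4 + row].
using Mat4 = std::array<float, 16>;

// Build the object-to-scene matrix from the four points. The columns
// are the images of the basis vectors (P1-P0, P2-P0, P3-P0) and the
// translation P0. Nothing here assumes equal axis lengths or 90-degree
// angles, so the same construction covers the distorted case too.
Mat4 frameFromPoints(Vec3 p0, Vec3 p1, Vec3 p2, Vec3 p3) {
    Vec3 ax = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z }; // image of +X
    Vec3 ay = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z }; // image of +Y
    Vec3 az = { p3.x - p0.x, p3.y - p0.y, p3.z - p0.z }; // image of +Z
    return {
        ax.x, ax.y, ax.z, 0.0f,
        ay.x, ay.y, ay.z, 0.0f,
        az.x, az.y, az.z, 0.0f,
        p0.x, p0.y, p0.z, 1.0f,
    };
}

// mat * vec with w = 1, i.e. transform a point from O into S.
Vec3 apply(const Mat4& m, Vec3 v) {
    return {
        m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12],
        m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13],
        m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14],
    };
}
```

By construction, apply maps (0,0,0) to P0 and each unit basis vector to the corresponding axis end point, which is exactly the O-to-O' transform asked for.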
  5. Unlock Audio

    Functions of Sound in Games

    One of the wonderful things about sound is that it can accomplish many different things. Additionally, the same sound used in one context can have a completely different meaning in another. This is true from an emotional, informative and clarity standpoint. In my game audio classes at DePaul University, I always point out moments when we hear the same thing in games but have a very different response or reaction to it. Sound is powerful, and if you work in games, you should think about its capabilities for your project, or work with someone who understands what effects it can have and all the ways it can be used. A single sound can be doing many different things at once! So with that in mind, here are eight ways sound can be used in games:

    Contextual/Narrative Sound

    This is probably the most straightforward entry on this list. When an action happens, such as a character moving, using an ability, or the player selecting something in the UI, we need to hear something that seems “appropriate” concurrently. If we don’t hear something when expected, it can be one of the most immediate ways to lose that sense of suspending disbelief or “buying into” the experience. These sounds need to be present, but they also need to be choreographed to the visual gesture: starting or stopping “out of sync” is just as much of a glaring error as not having the sound at all. Pretty much every game is chock-full of sound filling this role, but for an even more visceral example, check out the game Perception. The premise of Perception is that everything we see is based on sound reflected from the world. Think of it as similar to echolocation. If something isn’t generating sound on its own, the only way we see it is if sound travels out, bounces off surfaces in the environment, and returns to the listener. Everything we see is based on sound, so if we see anything, it is because that action/object/event has an intrinsic sound to it. For some, seeing all the sounds that populate our game worlds can help make it clear how vital those sounds are.

    Focuses Attention

    A very powerful design consideration is what our player is focusing on. Are they marveling at the art or environment of a new area in an RPG with a massive world? Will they be able to make the jump from one level to another in a platformer? Do they need to be ready to dodge an enemy attack? Most times, the auditory and visual cues work in conjunction with one another. This makes them very persuasive in telling the player that something is important and deserves their attention. However, having separate visual and auditory cues can also be very powerful and have incredible effects on the player. Look at this sequence from Amnesia: a sense of danger is communicated through an invisible monster splashing through the water, chasing you as you jump from box to box. Can you imagine how boring hopping between boxes would be without hearing the splashing footsteps coming after you? Are the boxes the real focus this whole time? No, not at all! That constant auditory reminder of impending doom is so strong that the player doesn’t need to see the footsteps of the monster in the water to be utterly terrified of it.

    Defines Space

    We are used to different spaces sounding different. If you yell in a small room, it sounds very different than yelling in an empty sports arena. Not only does it take longer for sound to reach a listener’s ear in a larger space, but when we are in large spaces, most of what we hear is reflected sound as opposed to direct sound. The sound of our voice goes out in every direction, with very little of it going directly to a listener’s ear when we’re in a large space like an arena. A listener may still hear this sound even if it doesn’t travel directly to their ears, but only after it’s bounced off a number of surfaces. This is what’s called reflected sound, and it’s most of what we’ll hear in a large space.
    In a small space, our listener will be closer to us. This means more of our voice will go directly to their ear, and the reflected sound will take less time to reach them. This gives a very different character to everything we hear in a small space as opposed to a large space. Additionally, the materials present in these spaces play a huge part in what sounds we hear. We hear certain types of sounds more when hard, flat surfaces are present as opposed to curved cloth couches. If we don’t acknowledge and emulate these sound characteristics, our game worlds will never feel real. Game audio folks spend a lot of time ensuring game worlds feel real. Here is a portion of the implementation used in Hitman 2 to ensure this happened:

    Creates Atmosphere/Mood

    This refers to the emotion a player feels while experiencing your game. While the previous point pertained to making a space feel right in terms of physics, this point is in terms of emotion. A large space can be awe-inspiring, majestic, threatening, magical, exciting, intense, etc.; the list could go on forever! Every sound and note we hear has the ability to give an emotional impression, but only if we want it to, and know how to execute it well. Watch the opening to BioShock on mute. It looks creepy, but the experience lacks any sort of visceral emotional response. Now play the opening with the sound on and close your eyes. Pay attention to how much emotion you feel with no visual component! Of course, the real impact is achieved when we have both the visual and the audio working together, but pay attention to where the emotional part of the experience is coming from.

    Emphasizes/Intensifies Action

    My favorite example of this is Doom, as well as any Tarantino movie. More than having a sound be present for a gesture, audio can add a layer of intensity that isn’t naturally there. Any team creating an experience that is highly “stylized” will have given significant thought to how the audio contributes to the overall aesthetic, because it is that important! Case in point: if the audio team hadn’t been incredibly thoughtful about the nature of Doom, the end experience could have missed the whole point of its world! Don’t believe me? Listen to this: A far cry from the intended experience of Doom, right? Here’s the correct audio. This effect can be true for entire soundscapes like in Doom, but it can also be true for single ability or action sounds. There are certain sounds that just aren’t appropriate for a healing spell, right? How can we communicate something positive, negative, disorienting, or dangerous, all in a single sound? Here are a number of ability sounds I created for the game Card Chronicles: Sentinels that all needed to communicate the nature of the ability being used through the quality of its corresponding sound. What makes the healing spell sound feel appropriate for a healing spell?

    Promotes Immersion (VR)

    This is very different than the contextual/narrative and space-defining audio we looked at already. While you could describe both of those capabilities of audio as making something in your game feel “believable,” immersion is the sense of the player actually being in that space. Immersion is getting someone to lose the sense of their physical self and feel like they are actually in another space or occupying a body other than their own. Your attention shifts from controlling something in a digital world using your physical body to feeling as if you are actually occupying the digital world. That is a huge jump. Advancements in and the accessibility of technology have made techniques such as spatial and ambisonic audio integral to the VR experience.
    More than hearing something to your left or right, we can accurately simulate the qualities of sound emanating from different points in 3D space, in different-sized spaces, with different materials, and when you’re looking in one direction versus another. But the ultimate audio sensation of immersion comes through binaural audio experiences. Not only do we hear the qualities of sound being affected by different spaces, but we can experience how the human ear perceives audio in that space as well. While ambisonic and spatial audio are incredible experiences, they are ultimately taking “believability” to a higher level. Ambisonic audio, in particular, has many applications in VR since it is a format that captures an entire sphere of sound around a point in physical space and can be converted for playback in headphones. However, for the most immersive audio experience, nothing beats binaural audio. This audio format makes you feel like you are actually there. If you want to better understand this difference in experience, grab some headphones and listen to this: Pretty amazing, isn’t it? Now, the drawback with binaural audio is that it is generated relative to where a microphone or listener is. Since we usually move in our game environments, it’s not easy to replicate accurately, and it can cause motion sickness if not used correctly. That’s how powerful this stuff is!

    Sets Pace as Gameplay Function

    The most common example of this is in rhythm games. If you’ve ever seen a serious Dance Dance Revolution tournament or its players, you know how fast and intense these games can be! A big reason the player is able to quickly and accurately time their feet to the visual cues is the music giving them a constant frame of reference. A different example is any sort of timer that has an associated audio cue. Many games have a timer that is ever-present, but when we get to our last 10 seconds or so, the timer’s audio either becomes audible or is louder in the mix. It definitely gets the point across that you need to complete an action/puzzle/objective sooner rather than later!

    Smooths Transitions

    There are a couple of different flavors to this one, and many more than I can speak about here, depending on the genre and mechanics of your game. But the two that I can touch on with good certainty that they’ll be relevant to you are transitioning between story/cinematics and gameplay, and loading screens. Especially in many AAA titles, we are potentially switching between linear story elements and gameplay sequences regularly. In the playthroughs I’ve had with recent military shooters, 15 minutes of the game can have 2-3 moments of linear story. Using audio in conjunction with a visual effect or shift can make this transition feel effortless and seamless, almost like playing through a movie as opposed to pausing your gameplay experience. Another flavor of smoothing transitions is during loading screens. Developers have come up with a ton of great ways to make loading screens less of a “drag” on the experience, such as Namco’s Star Blade mini-game. But sometimes a traditional loading screen is inevitable, and audio and music can help make these moments much more interesting. Mute the video below if you want to see how much of a complete bore a loading screen without audio can be.

    Let’s Wrap Up!

    We’ve looked at eight different ways audio can enhance your game, but there are many others as well! In order to ensure your game has an engaging experience, the audio needs to hit on all of these dimensions and be purposeful in its execution. Time and thought need to be given to what you’re trying to accomplish and how audio can help achieve it. Without that, a game will be missing an entire dimension of an effective and engaging experience. Be sure to check out all the game audio awesomeness at Unlock Audio! To reach out: hello@unlockaudio.com
  6. JohnElliott

    Books and articles on game dev

    Greetings all. I'm a researcher from Portugal, with the multimedia department of CIEBA, and I'm currently writing my PhD thesis on videogame composers' expansion of knowledge into areas of sound design, implementation, programming, and soft skills. I'm here to ask for bibliography, books and articles, on the process of creating videogames, particularly about the differences between indie and AAA production (I am focusing on indie). So far I've used O'Donnell's "Developer's Dilemma", a great read I recommend to all, as well as my own experiences, but I am in need of more sources. Thanks in advance!
  7. Dawoodoz

    Storing full games on paper?

    A musical score can be printed on paper, survive ransomware attacks, be understood by people in a distant future who don't speak the same language, and be independent of the instrument that plays it. Computer software, however, is a lot more volatile: storage devices go out of fashion like floppy discs, and cloud services get hacked. If someone were to make a minimalistic, hardware-agnostic standard for describing classic computer games in paper format, what would be your preferences?

    Syntax: Using a high-level programming language would have all the issues of alternative interpretations, dialects and transpiling that come with scripted languages. English keywords would be like writing in Latin once the standard gets old, so I guess math is the language somehow. Storing byte-codes with scattered error-correcting bit patterns would not be human readable, but for mystery games, that might be the point.

    File size: A tiny game could be read with a phone camera from a large book and played instantly, while a larger paper used only as a reliable backup could be scanned at a higher resolution and saved on the computer.

    Security: Would you trust a game that you downloaded from a physical book not to contain malware? The exploit would be very old compared to your anti-virus, but books in a library can be tampered with.

    Material: Would there be a point in only using cheap paper if it cannot withstand the inevitable era of nuclear winters? Would steel and tungsten-carbide printing plates be too expensive compared to opto-laser crystal discs, which store more data but are harder to decipher?
  8. Hey everyone, this is my first time posting on gamedev.net, so please excuse me if I'm a little green with all of this. As VR games become more mainstream and better game mechanics are developed and refined, I've been theorizing about new and interesting ways to create interaction between players and NPCs. Currently, in the field of RPGs in particular, all quests and world interaction are delivered by NPCs or by other objects in the world such as job/kill boards. Even if a quest has been fully scripted, this form of data transmission is extremely static and inflexible. It doesn't allow the player to progress the story in their own way or to have flexible conversations where the quest is defined over the duration of said interaction. Because of this, I've been looking into how machine learning and other forms of AI could allow a user to receive quest data in more organic ways. Maybe we could have an AI that could process voice chat, analyze what was said, and then produce conversation pieces geared toward the quests on offer. But with no experience in the field of machine learning, I wouldn't even know where to begin. That's when it hit me: why try to create smarter AI when you could just use humans to do the job for you? So I searched high and low on the internet for any game that may have done this, but the only games that came close were EVE Online, with its player-driven trade market/encounters, Second Life (for the same reason), and maybe Fallout 76, with its lack of human NPCs. Nowhere else could I find player characters being the primary drivers of social, economic, and character growth. Is it so radical to think that a game and its questing systems could be completely player-driven, with no NPCs at all? With VR in particular, where games like VRChat and Rec Room are redefining social interaction, hasn't the time come when we can make questing an interaction between player and player instead of player and NPC? I was wondering if anyone could point me in the direction of a game that comes close to this idea? Thanks, R3ST4RT
  9. Unlock Audio

    Getting the best from your audio department

    Communication can be the hardest part of any collaborative creative process. This is especially true in art, and it's definitely true in audio. Every community and discipline has its own language, its own slang, and its preferred way of communicating. So today, I'll be talking about some of the ways we audio folks like to communicate and some of the ways you can be awesome at talking about audio. Before I begin, I do want to say that any audio person or composer worth their salt has (hopefully) realized that part of their job is to act as an interpreter and translator for non-audio/music people. This is true even within the audio community: composers need to know sound design and non-musical audio terms just like sound designers need some basic music lingo. If your audio people aren't trying to find different ways to communicate and understand what you want/need, they're not doing part of their job! Alright, let's talk about some things you can bring to the table that will make audio communication easy.

    References

    If you take only one thing away from this article, please make it this! References are incredibly helpful for any type of audio person. Again, so much of the difficulty we're discussing is communication. You can remove a massive amount of potential miscommunication by being able to hit play or send a link and say "like this." Phrases like "make it sound heavier," "bigger," "higher," "darker," or "harder" all make perfect sense to the person saying them, but there are a myriad of ways to make something sound "higher" or "darker," etc. On top of that, it's not only a matter of executing the idea, but of understanding the idea of the intended sound itself. Don't be the person that says "it should sound more blue" and thinks you've effectively communicated. It may make perfect sense to you, but trust me, your audio people have no idea what that means. This is especially true when talking through emotional content and experience for music (there are LOTS of different types of "sad" music!). Have some references, know what you like about them, and say "like this."

    Vocabulary

    Even with references, having some basic terms in your pocket that audio people know will help you immensely. In the game audio courses I teach at DePaul University, we spend a considerable amount of time getting students to talk about audio in a way they haven't before. There are lots of words used to describe audio and music, but below are some of the most common and universally accepted. Use these, and your audio discussions will be much more efficient and productive.

    Pitch - the psychological perception of frequency; i.e., playing different notes on a piano. Don't just say make it "higher." Say "higher pitched" or "raise the pitch." Your audio folks will know exactly what you mean.

    Loudness - how loud we perceive a sound to be. Words like louder and quieter do a pretty good job of communicating this, but it happens all the time that non-audio people will try to talk about loudness and use terms like "lower," "higher" or "softer." These words can have lots of different meanings in the world of audio. So, anytime you want to talk about how loud or quiet something is and want to be super purposeful, throw the term loudness in there.

    Timbre (pronounced tamber) - the tone quality or "color" of a sound. If you play middle C on a piano just as loud as middle C on a viola, the difference between these sounds is their timbre. Similar to loudness, people use "higher," "lower," and "softer" to describe timbre as well. This is fine, but you can see how easily these words can be misinterpreted. Want to be extra sure you're communicating what you want? Use timbre in your sentence.
There are many other terms us audio folks use to be very specific when talking about audio, but if you begin to use the three I’ve listed above, you will save yourself (and your audio people) so much time and headache! Your Game! No, you don’t need to wait until the game is almost done to bring in an audio person. Quite the opposite, actually! But you do need something to help you communicate your game and the world you’re creating to your audio people, even if the game is in its early stages. (Quick side note, you should totally bring in your audio collaborators as early as possible. You will be so much happier with your end audio experience. If you do this already, YOU ROCK!) For more emotional types of audio such as voice acting and music, having character, concept or environment art can be a fantastic resource if actual gameplay isn’t available. Also, if you have a significant backstory or lore you’ve created, this can be great for helping decide how this character should sound and/or how the world should “feel.” But even with these, audio folks need to know a basic outline of what the gameplay is going to be like and any sort of progression to it. There are many ways we can tailor audio to closely “fit” the game and gameplay experience, but we need to know these considerations as early as possible. Consider a stealth-action game: knowing that there is a stealth mechanic with different stages of intensity can open up a world of possibilities for composing and implementing an interactive score. For less-emotional audio, the best thing is to have video of animations or events. When I worked at a large corporate developer in the past, sometimes I would literally walk over to an animator or programmer’s desk and take video with my phone to begin the sound design. Because of the processes in place, they couldn’t send the animation to me as “final,” but I could begin the experimentation process. Plus, 90% of the time, it was the final version anyway. 
Realistic Expectations

Audio and music are both a process - and that’s OK! It rarely happens that the first version of a sound or piece of music is ready to go into the final game. Exploration, experimentation, and sometimes failure are just part of the gig. Knowing that you are always getting closer to your goal is important - especially when you’re excited to hear what your audio people have been cooking up, but it doesn’t quite hit the mark.

That being said, if version 15 sounds basically the same as the previous 14 versions, that points to either a poor audio person or a serious lack of effective communication. Each version should try another interpretation of your notes or add to what came before. Comparing the differences between multiple versions can also be a very helpful way to communicate - but in order to do that, we first need to create those different versions.

Overall, if you take the time to think about and purposefully communicate with your audio person instead of improvising descriptions and goals on the spot, you should be in great shape. Pair that with good references, some basic audio vocabulary, and game materials (art, animations, gameplay), and your audio folks should be able to dive right in.

Be sure to check out Unlock Audio! Want to reach out? hello@unlockaudio.com
  10. Unlock Audio

    Furious Seas Music Walkthrough

    Hi there! Let's look at how we can create a cohesive and holistic score by examining some of the music composed for the VR game Furious Seas! We'll talk about how each individual piece of music relates to the others, some of the theory and composition techniques involved, and how the music interacts with moments and progression through the game itself. Feel free to ask anything and to give me feedback on this video!
  11. Hey everybody, Was looking for some advice... I've found myself trying to find a game engine I like, and 10 years later, I still have not found it. My goals in this:

1. To build a Terraria-style 2D crafting game - only because I want to. (Not looking to go commercial.)
2. To finally learn OOP properly.
3. To do all of this within Notepad or a lightweight IDE.

I've been programming for 20-some years (business apps) - have a ton of working business/forms software out in the world - but I've always avoided OOP and have never been able to wrap my head around it. Specifically, the part I don't get / don't like is that I seem to have to create every single instance of an object there will ever be in a main loop, and then bend over backwards getting all the instances to interact with all the other instances. But I'd like to finally 'get this'. Pygame helped me come close. If anyone has a book or resource they can recommend that gave them an 'Ah-ha!' moment, please suggest it and I will read it. I can program in C, C# (though my OOP skills suck), Python, VB, and Lua.

Here's a small sample of the engines I have tried:

Pygame: I've had the most success with this, but eventually hit a performance wall. I like that you can use Notepad or something lightweight and get a lot done in a short time, but as far as I know, 'Super Potatoe Bruh' seems to be as high as you can fly with this engine.

Python Arcade: Also had a lot of luck with this, but then my Linux version broke, and with it being a one-man show, it doesn't seem to be the direction to head long term.

Unity: I don't like that it's controlled by a company via login. I'm looking for something I know will be around for a long time for certain, where I can't be locked out someday.

Godot: Seriously considering this, but worried I will be missing out on concepts that they hide from the user... pretty sure this is not an engine for someone wanting to finally learn OOP done properly. Don't super like that it's in an IDE; I'd prefer to be working in flat files.

LÖVE (Lua): Also had a lot of good times here, but eventually hit a wall. I don't particularly like the language and would prefer to be working in something closer to what I see at work, i.e. C#, Python, etc.

MonoGame / C#: Got pretty far here too. Then hit a wall trying to create a button, lol. Yes, a button. It seemed to me at the time that hard things were easy and really easy things were undocumented and hard. Though Terraria was made in this as far as I know, so I know the engine's more than I really need.

Python Panda3D: Was able to get pretty far in this too, but the limited content pipeline made it tough, and I'm more interested in 2D.

Python Pyglet: Found the support to be lacking; it was harder to find examples and community help at the time.

Anyway - TL;DR: Can anyone suggest an engine or something else I may not have tried? I am willing to learn C++ or Java if need be.
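The "create every instance in a main loop" pain the poster describes is usually solved with the entity-list pattern most 2D frameworks use. Below is a minimal Python sketch (not from the post; all class names are illustrative) where instances interact through a shared world list rather than through direct references to each other:

```python
# A minimal sketch of the entity-list pattern common in 2D game loops.
# All names here are illustrative, not from any particular engine.

class Entity:
    """Base class: anything that lives in the game world."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.alive = True

    def update(self, world, dt):
        pass  # subclasses override this

class Player(Entity):
    def update(self, world, dt):
        self.x += 10 * dt  # stand-in for input-driven movement

class Slime(Entity):
    def update(self, world, dt):
        # Interact with other instances *through the world list*,
        # instead of wiring every instance to every other instance.
        player = next(e for e in world if isinstance(e, Player))
        if abs(player.x - self.x) < 1:
            self.alive = False  # "squashed" when the player gets close

def tick(world, dt):
    """One frame: update everything, then sweep out dead entities."""
    for e in list(world):
        e.update(world, dt)
    world[:] = [e for e in world if e.alive]

world = [Player(0, 0), Slime(5, 0)]
for _ in range(60):          # simulate one second at 60 FPS
    tick(world, 1 / 60)
```

The key idea: the main loop only knows about one list, and each entity's `update` decides for itself how to react to the rest of the world, so you never hand-wire pairs of instances together.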
  12. gdarchive

    Composition 101: Balance

    In physics, balance is the point where a specific distribution of forces comes to a standstill. In a balanced composition, all elements are arranged in such a way that no change seems possible. The piece must give a feeling of steadiness; otherwise, it will just seem off. Rudolf Arnheim, in his book Art and Visual Perception, states that there are three elements to balance: shape, direction, and location. He also says that in the case of imbalance, “the artistic piece becomes incomprehensible […] the stillness of the work becomes a handicap”. And that’s what gives that frustrating sensation of frozen time. In this simple example, you can see all of this. Having the sphere off-center gives a sensation of unrest. The sphere almost seems to be pulled toward the corner, as if an invisible force were dragging it away from the center. These pulls are what Arnheim calls Perceptual Forces. And with the sphere in the center of the square, you have a sense of balance, where all the forces pulling from the sides and corners are equal. Returning to physics: when talking about balance, the first thing that pops into our heads is weight. And that’s what it is all about - what we think. Because, as I said before, perception is just the brain processing images. So, if talking about balancing something makes us think of weight, it must have something to do with weight in art too, right? Exactly. Arnheim talks about knowledge and weight in balance, referring to the fact that anyone who sees a picture of a scale with a hammer on one side and a feather on the other knows that the first one is heavier. If those scales are drawn perfectly level, the image will just seem off. But balance does not always require symmetry, as we might tend to think. It isn’t equilibrium that brings balance. If the scales tilt to the “correct” side (the hammer), perceptual balance would hav