Ming-Lun "Allen" Chou


  1. Hi, all. I've finally finished my bouncy bone tech addition to my Unity extension. I initially used Unity-Chan as my test model, but then I decided that I wanted to make my own test model, doubling as a mascot for this extension, so I began learning Blender and made UFO Bunny.

     [media]https://www.youtube.com/watch?v=bcpFaRJmDS8[/media]

     This is a silly video I made that shows off UFO Bunny being initially stiff and becoming bouncy when the bouncy bones feature is turned on. Later on, the video also demonstrates some other applications of the core bouncy tech. Here is a tech breakdown of how I made the bouncy bones effects:

     Data Definition & Construction

     First off, I build a chain of bone data by specifying a root bone, referenced by the bone's Transform component. Then I perform a breadth-first search to visit all of the transform's direct and indirect children. All the transforms visited are added to an array of bone data in visited order. This way, when I want to iterate through the bone data, I can just go through the array once from start to end, guaranteed to always process parents before their children. Thus, when it's a child's turn to be processed and it needs to inherit processed data from its parent (e.g. transform, bone chain length from root, etc.), the parent's data will already have been processed earlier in the array.

     Making the Bones Bouncy

     The core bouncy logic makes use of numeric springing (Intro / Examples / More Info). It is essentially a specialized type of soft constraint. Given a target value (e.g. float, vector, etc.), a numeric spring tracks its current value closer to the target each simulation step, with smoothly changing velocity. For bouncy bones, the core problem is how to compute the target transforms for the bones to be sprung toward. I model the problem using "pose stiffness" and "length stiffness", each of which can be defined as a curve, where the input is the percentage of a bone's chain length from the root versus the entire chain length, and the output is the stiffness percentage.

     I define pose stiffness as a child bone's desire to maintain its relative transform to its parent. So for a bone with full pose stiffness, its target transform is its relative transform to its parent appended to its parent's current transform spring values in world space. As for length stiffness, I define it as a child bone's desire to maintain its distance from its parent. So for a bone with full length stiffness, its target transform will maintain the same distance away from its parent's current transform spring values. Further, during each time step, I remove a percentage equal to length stiffness from a child bone's linear velocity parallel to the vector pointing from its parent to itself, so at full stiffness there will be no linear velocity component that would alter the distance between the two.

     With the target transforms (positions & rotations) defined, the next step is to simulate the numeric springs to track the target values. For position, it's pretty straightforward: just apply numeric springing to individual vector components. With rotation, however, it's a bit tricky. There are multiple ways to represent rotation in 3D, the most common being (1) 3x3 transform matrices, (2) axis-angle vectors (direction is rotation axis & magnitude is rotation angle), and (3) quaternions. At first I chose (2) axis-angle vectors, because I've been accumulating effectors in this data format (more on effectors later). So my first try at a rotation spring was basically a vector spring, where the vector is the axis-angle vector that gets converted to a quaternion as a final step (formula here). At first I thought it was working alright, until I noticed occasional kinks in UFO Bunny's ears (which can be observed here when she spins around). Upon further investigation, it was due to the "360-degree wrap-around problem". Basically, a target axis-angle vector of length PI (180 degrees) is equivalent to its negated vector, because rotating 180 degrees around an axis is the same as rotating 180 degrees around the opposite axis. So while the logical & spatial target rotation remains the same, the underlying target vector changes drastically, shooting the numeric spring wildly in the other direction.

     I tried various ways to come up with a mathematically correct way to spring quaternions, but only got as far as figuring out how to spring a point on a 3D unit sphere's surface, and I haven't been able to extend that to 4D (yet). I thought the 3D unit sphere spring was already too computationally heavy for this task, let alone an extension to 4D, so I decided to cheat by just springing individual components of quaternions as if they were 4D vectors, normalizing the values before reading them back from the spring into quaternions. It's not ideal; the rotation angle is not sprung in a mathematically correct way. But hey, it LOOKS OKAY, and it's computationally cheap. So that is my final solution. You can find my code for vector & quaternion springs here.

     Effector Accumulation

     Effectors are force sources that push or pull things around. I mentioned earlier that I implement this logic using axis-angle vectors. The reasoning comes from torque accumulation in physics simulation. No matter in which order you apply a series of forces, the resulting accumulated torque is always the same. In other words, torque application is order-independent. It is done by taking the cross product of each force with the vector from the center of mass to the point of force application. Adding these cross products together gives the final total torque from all forces. I wanted effectors to apply rotational effects in the same order-independent way, so I accumulate their rotational effects by adding together the cross products of each effector's linear velocity vector and the vector from the affected object's center to the effector's center. Once I have the final accumulated rotational effect, I convert it to a quaternion and combine it with a bone's target rotation, computed via the method mentioned above.

     Transform Update

     Lastly, I have the sprung transform results, and I have to apply them to the bone's Transform component. I can't just set the Transform component's position and rotation to the sprung values, because then there would be nothing left to base the next frame's target computation on. So I cache the bone's original transform (position, rotation, and scale; why scale? more on that later) at the beginning of LateUpdate, compute the target transforms, update the transform springs, set the transform values to the sprung values so the renderer uses them for rendering, and then restore the transform back to the cached values post-render (using the Camera.OnPostRender() delegate).

     Squash & Stretch

     I cache scale as well because there's an additional feature that alters a bone's scale for volume preservation (squash & stretch). Imagine a rubber block that's stretched to 4 times its original length in the direction of, say, its local Z axis. If nothing is done to its scale, its volume becomes 4 times as large. This is where squash & stretch comes in. To maintain volume, the block needs to be shrunk in the directions perpendicular to the stretching axis, in this case its X and Y axes. The scale value to shrink to in each perpendicular axis is the square root of the reciprocal of the stretch amount; in this case, it is 1/2 in both the local X and Y axes (4 * (1/2) * (1/2) == 1). The same applies when the block is squashed to 1/4 its original length: its local scale in the X and Y axes would need to become 2 to maintain volume. I have each child bone's original distance from its parent, and I compare it to the distance between the sprung child bone and its parent. If the sprung distance is larger than the original, the bone is stretched; otherwise, it's squashed. Bones are scaled accordingly to create the sense of volume preservation.

     That's all! Feel free to ask if you'd like me to elaborate on certain details or explain things I forgot to mention. And I hope you like the video!
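The numeric springing mentioned above (see the linked Intro / Examples / More Info articles) boils down to a small per-scalar update. Here is a minimal standalone sketch of a semi-implicit damped-spring step, patterned after the formulation in those articles; Python and the names are mine:

```python
def spring_step(x, v, target, zeta, omega, h):
    """One semi-implicit step of a damped numeric spring.

    x      - current value          v     - current velocity
    target - value to spring toward zeta  - damping ratio (< 1 oscillates)
    omega  - angular frequency      h     - time step
    """
    f = 1.0 + 2.0 * h * zeta * omega
    oo = omega * omega
    hoo = h * oo
    hhoo = h * hoo
    det_inv = 1.0 / (f + hhoo)
    new_x = (f * x + h * v + hhoo * target) * det_inv
    new_v = (v + hoo * (target - x)) * det_inv
    return new_x, new_v
```

Springing a vector (or the four quaternion components, as described above) just applies this step per component.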
  2. Hi, all: A new update to Boing Kit, my bouncy VFX plugin for Unity, has just been released. Some more tech details: The solo behavior component reacts to linear and/or angular movement with bouncy effects simulated by numeric springing (more details in the links below). Effectors can push reactors around either by position delta or by impulse (linear and/or angular). The reactor field component is a solution for simulating fields of reactors, and it allows samplers to pull reactor data out via trilinear interpolation. This is good for emulating a massive number of reactors. The solo behavior, effectors, and reactors can be simulated in parallel using Unity's job system in Unity 2018.1 or newer. The reactor field can be simulated on the GPU using compute shaders, but can also be simulated on the CPU as an alternative. The v1.1 update adds a new "propagation" feature that simulates propagation of bouncy effects across reactor fields, which is good for simulating wind, water waves, shock waves, implosions, explosions, etc. It, too, can be simulated on either the GPU or the CPU. You can also see these in action in the original v1.0 video.

     Asset store page: https://assetstore.unity.com/packages/tools/particles-effects/boing-kit-135594

     Here's some more info:
     Manual - http://longbunnylabs.com/boing-kit-manual/

     Here's some technical explanation I wrote on the mathematical tool behind the effects, numeric springing:
     Part 1 - http://allenchou.net/2015/04/game-math-precise-control-over-numeric-springing/
     Part 2 - http://allenchou.net/2015/04/game-math-numeric-springing-examples/
     Part 3 - http://allenchou.net/2015/04/game-math-more-on-numeric-springing/
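Pulling reactor data out of a field cell via trilinear interpolation reduces to seven lerps over the eight surrounding corner values. A minimal sketch (Python; the names and corner ordering are my assumptions, not Boing Kit's actual code):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def trilerp(c000, c100, c010, c110, c001, c101, c011, c111, tx, ty, tz):
    """Trilinearly interpolate among the 8 corner values of a grid cell.
    cXYZ is the value at corner (X, Y, Z); tx/ty/tz are normalized in [0, 1]."""
    x00 = lerp(c000, c100, tx)   # interpolate along x on the 4 cell edges
    x10 = lerp(c010, c110, tx)
    x01 = lerp(c001, c101, tx)
    x11 = lerp(c011, c111, tx)
    y0 = lerp(x00, x10, ty)      # then along y
    y1 = lerp(x01, x11, ty)
    return lerp(y0, y1, tz)      # finally along z
```

The same seven-lerp pattern applies whether the corner values are scalars or per-component vectors.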
  3. Ming-Lun "Allen" Chou

    Boing Kit - Unity bouncy VFX tools

    Hi, all: Boing Kit, my bouncy VFX plugin for Unity, has just been released. It can be used to create single-body & interactive multi-body bouncy effects. It utilizes the GPU to offload work from the CPU, and it also makes use of Unity's job system (2018.1 or newer) for parallel computing.

    Asset store page: https://assetstore.unity.com/packages/tools/particles-effects/boing-kit-135594

    Here's some more info:
    Details - http://longbunnylabs.com/boing-kit/
    Manual - http://longbunnylabs.com/boing-kit-manual/

    Here's some technical explanation I wrote on the mathematical tool behind the effects, numeric springing:
    Part 1 - http://allenchou.net/2015/04/game-math-precise-control-over-numeric-springing/
    Part 2 - http://allenchou.net/2015/04/game-math-numeric-springing-examples/
    Part 3 - http://allenchou.net/2015/04/game-math-more-on-numeric-springing/

    My plan for the next update is to add support for bouncy hierarchical/chained objects (e.g. bones) utilizing the same framework.
  4. Ming-Lun "Allen" Chou

    Get Angle of 3D Line and Rotate Polygon With Result

    Actually, atan2 is different from atan. Atan's range spans only one PI (2 quadrants), while atan2's range spans 2 PI (all 4 quadrants). The separation into individual parameters eliminates the sign ambiguity of the inputs. (e.g. all atan sees when called as atan(-1.0 / 2.0) is -0.5 being passed in. Does that mean (x, z) == (-1, 2) or (1, -2)?)
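A quick numeric illustration of that ambiguity (Python's math module used here for convenience):

```python
import math

# atan only sees the ratio: both (-1, 2) and (1, -2) collapse to -0.5,
# so atan alone cannot tell which quadrant the point lies in.
from_ratio = math.atan(-1.0 / 2.0)   # ~ -0.4636 rad

# atan2 receives the two components separately, preserving their signs:
a = math.atan2(-1.0, 2.0)            # (-1, 2)  -> ~ -0.4636 rad
b = math.atan2(1.0, -2.0)            # (1, -2)  -> ~  2.6779 rad, opposite quadrant
```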
  5. Ming-Lun "Allen" Chou

    Modeling restitution

    For the games I've worked on, we use physics engines on things that need to look "physically plausible" (destruction debris, tossed objects, etc.), but not on things that need to be precise (e.g. character movement). In other words, we treat game physics engines as decorative tools, and we don't base core game mechanics on unsupervised physical simulation. For things we need precise control over, we always use specialized logic outside of the physics engine. If a Newton's cradle never interacts with anything, then I'd just have it animated (simulating it physically is a waste of processing power). If it can interact with other objects, then I'd still have it animated, but physicalize it when touched by other objects. Once it's physically interacting with other objects, I doubt physical precision would be a concern, unless the Newton's cradle is the absolute main focus of the game.
  6. Ming-Lun "Allen" Chou

    What are the physics algorithms used in this?

    Not sure. But if all you need is what's shown in the video, then I'd say using a full-blown physics engine is overkill. You probably don't need things like friction, restitution, torque, etc. I'd say all you need is a decent broadphase and specialized circle-vs-circle collision resolution (or a peer-avoidance steering behavior, which can be somewhat similar in implementation in this particular case). Edit: Maaaaybe also consider islanding if you need to parallelize processing of disjoint groups. I think that's more than enough for this use case.
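For concreteness, a specialized circle-vs-circle resolution can be as simple as pushing two overlapping circles apart along the line between their centers. A minimal positional sketch (Python; the names and the half-and-half split are my assumptions):

```python
import math

def resolve_circle_overlap(p1, p2, r1, r2):
    """Positional resolution for two overlapping circles: move each circle
    half of the overlap apart along the center-to-center axis."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    overlap = r1 + r2 - dist
    if overlap <= 0.0:
        return p1, p2                    # not overlapping; nothing to do
    if dist > 0.0:
        nx, ny = dx / dist, dy / dist    # separation axis: center to center
    else:
        nx, ny = 1.0, 0.0                # coincident centers: pick any axis
    half = 0.5 * overlap
    return ((p1[0] - nx * half, p1[1] - ny * half),
            (p2[0] + nx * half, p2[1] + ny * half))
```

A broadphase (e.g. a uniform grid) would decide which pairs get this test at all; a steering-based variant would instead feed the overlap direction into each agent's velocity.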
  7. Ming-Lun "Allen" Chou

    Swing-Twist Interpolation (Sterp), An Alternative to Slerp

    I would say it depends on your intended look. As a general fallback, maybe align the twist axis with the direction in which the object's vertices are most spread out, using techniques like PCA (principal component analysis), or simply the longest side of the object's OBB? As for the mechanical example, you gave a good example of where splitting the swing and twist portions in time is even more desirable. In this case, maybe a version of sterp that takes separate swing and twist interpolation parameters can be useful? Or, if full control is needed, simply use the raw swing and twist from the decomposition function.

    public static Quaternion Sterp
    (
      Quaternion a, 
      Quaternion b, 
      Vector3 twistAxis, 
      float tSwing, 
      float tTwist
    )
    {
      Quaternion deltaRotation = b * Quaternion.Inverse(a);

      Quaternion swingFull;
      Quaternion twistFull;
      QuaternionUtil.DecomposeSwingTwist
      (
        deltaRotation, 
        twistAxis, 
        out swingFull, 
        out twistFull
      );

      Quaternion swing = Quaternion.Slerp(Quaternion.identity, swingFull, tSwing);
      Quaternion twist = Quaternion.Slerp(Quaternion.identity, twistFull, tTwist);

      return twist * swing;
    }
  8. Here is the original blog post. Edit: Sorry, I can't get embedded LaTeX to display properly. The pinned tutorial post says I have to do it in plain HTML without embedded images? I actually tried embedding pre-rendered equations and they seemed fine when editing, but once I submitted the post it just turned into a huge mess. So...until I can find a proper way to fix this, please refer to the original blog post for formatted formulas. I've replaced the original LaTeX mess in this post with something at least more readable. Any advice on fixing this is appreciated.

     This post is part of my Game Math Series. Source files are on GitHub. Shortcut to sterp implementation. Shortcut to code used to generate animations in this post.

     An Alternative to Slerp

     Slerp, spherical linear interpolation, is an operation that interpolates from one orientation to another, using a rotational axis paired with the smallest angle possible. Quick note: Jonathan Blow explains here how you should avoid using slerp if normalized quaternion linear interpolation (nlerp) suffices. Long story short: nlerp is faster but does not maintain constant angular velocity, while slerp is slower but maintains constant angular velocity. Use nlerp if you're interpolating across small angles or you don't care about constant angular velocity; use slerp if you're interpolating across large angles and you care about constant angular velocity. But for the sake of using a more commonly known and used building block, the rest of this post will only mention slerp. Replacing all following occurrences of slerp with nlerp would not change the validity of this post.

     In general, slerp is considered superior to interpolating individual components of Euler angles, as the latter method usually yields orientational sways. But sometimes slerp might not be ideal. Look at the image below showing two different orientations of a rod. On the left is one orientation, and on the right is the resulting orientation of rotating around the axis shown as a cyan arrow, where the pivot is at one end of the rod. If we slerp between the two orientations, this is what we get: Mathematically, slerp takes the "shortest rotational path": the quaternion representing the rod's orientation travels along the shortest arc on a 4D hypersphere. But, given the rod's elongated appearance, the rod's moving end seems to deviate from the shortest arc on a 3D sphere. My intended effect here is for the rod's moving end to travel along the shortest arc in 3D, like this: The difference is more obvious if we compare them side by side: This is where swing-twist decomposition comes in.

     Swing-Twist Decomposition

     Swing-twist decomposition is an operation that splits a rotation into two concatenated rotations, swing and twist. Given a twist axis, we would like to separate out the portion of a rotation that contributes to the twist around this axis; what's left behind is the remaining swing portion. There are multiple ways to derive the formulas, but this particular one by Michaele Norel seems to be the most elegant and efficient, and it's the only one I've come across that does not involve any use of trigonometric functions. I will show the formulas first and paraphrase his proof later.

     Given a rotation represented by a quaternion R = [W_R, vec{V_R}] and a twist axis vec{V_T}, combine the scalar part of R with the projection of vec{V_R} onto vec{V_T} to form a new quaternion:

       T = [W_R, proj_{vec{V_T}}(vec{V_R})]

     We want to decompose R into a swing component and a twist component. Let S denote the swing component, so we can write R = ST. The swing component is then calculated by multiplying R with the inverse (conjugate) of T:

       S = R T^{-1}

     Beware that S and T are not yet normalized at this point. It's a good idea to normalize them before use, as unit quaternions are just cuter.

     Below is my code implementation of swing-twist decomposition. Note that it also takes care of the singularity that occurs when the rotation to be decomposed represents a 180-degree rotation.

       public static void DecomposeSwingTwist
       (
         Quaternion q, 
         Vector3 twistAxis, 
         out Quaternion swing, 
         out Quaternion twist
       )
       {
         Vector3 r = new Vector3(q.x, q.y, q.z);

         // singularity: rotation by 180 degrees
         if (r.sqrMagnitude < MathUtil.Epsilon)
         {
           Vector3 rotatedTwistAxis = q * twistAxis;
           Vector3 swingAxis = Vector3.Cross(twistAxis, rotatedTwistAxis);

           if (swingAxis.sqrMagnitude > MathUtil.Epsilon)
           {
             float swingAngle = Vector3.Angle(twistAxis, rotatedTwistAxis);
             swing = Quaternion.AngleAxis(swingAngle, swingAxis);
           }
           else
           {
             // more singularity: 
             // rotation axis parallel to twist axis
             swing = Quaternion.identity; // no swing
           }

           // always twist 180 degrees on singularity
           twist = Quaternion.AngleAxis(180.0f, twistAxis);
           return;
         }

         // meat of swing-twist decomposition
         Vector3 p = Vector3.Project(r, twistAxis);
         twist = new Quaternion(p.x, p.y, p.z, q.w);
         twist = Normalize(twist);
         swing = q * Quaternion.Inverse(twist);
       }

     Now that we have the means to decompose a rotation into swing and twist components, we need a way to use them to interpolate the rod's orientation, replacing slerp.

     Swing-Twist Interpolation

     Replacing slerp with the swing and twist components is actually pretty straightforward. Let Q_0 and Q_1 denote the quaternions representing the rod's two orientations we are interpolating between. Given the interpolation parameter t, we use it to find "fractions" of the swing and twist components and combine them together. Such fractions can be obtained by performing slerp from the identity quaternion, Q_I, to the individual components. So we replace:

       Slerp(Q_0, Q_1, t)

     with:

       Slerp(Q_I, S, t) Slerp(Q_I, T, t)

     From the rod example, we choose the twist axis to align with the rod's longest side. Let's look at the effect of the individual components Slerp(Q_I, S, t) and Slerp(Q_I, T, t) as t varies over time below, swing on the left and twist on the right: And as we concatenate these two components together, we get a swing-twist interpolation that rotates the rod such that its moving end travels along the shortest arc in 3D. Again, here is a side-by-side comparison of slerp (left) and swing-twist interpolation (right):

     I decided to name my swing-twist interpolation function sterp. I think it's cool because it sounds like it belongs to the function family of lerp and slerp. Here's to hoping that this name catches on. And here's my code implementation:

       public static Quaternion Sterp
       (
         Quaternion a, 
         Quaternion b, 
         Vector3 twistAxis, 
         float t
       )
       {
         Quaternion deltaRotation = b * Quaternion.Inverse(a);

         Quaternion swingFull;
         Quaternion twistFull;
         QuaternionUtil.DecomposeSwingTwist
         (
           deltaRotation, 
           twistAxis, 
           out swingFull, 
           out twistFull
         );

         Quaternion swing = Quaternion.Slerp(Quaternion.identity, swingFull, t);
         Quaternion twist = Quaternion.Slerp(Quaternion.identity, twistFull, t);

         return twist * swing;
       }

     Proof

     Lastly, let's look at the proof for the swing-twist decomposition formulas. All that needs to be proven is that the swing component S does not contribute to any rotation around the twist axis, i.e. the rotational axis of S is orthogonal to the twist axis.

     Let vec{V_{R_para}} denote the component of vec{V_R} parallel to vec{V_T}, which can be obtained by projecting vec{V_R} onto vec{V_T}:

       vec{V_{R_para}} = proj_{vec{V_T}}(vec{V_R})

     Let vec{V_{R_perp}} denote the component of vec{V_R} orthogonal to vec{V_T}:

       vec{V_{R_perp}} = vec{V_R} - vec{V_{R_para}}

     So the scalar-vector form of T becomes:

       T = [W_R, proj_{vec{V_T}}(vec{V_R})] = [W_R, vec{V_{R_para}}]

     Using the quaternion multiplication formula, here is the scalar-vector form of the swing quaternion:

       S = R T^{-1}
         = [W_R, vec{V_R}] [W_R, -vec{V_{R_para}}]
         = [W_R^2 - vec{V_R} · (-vec{V_{R_para}}), vec{V_R} × (-vec{V_{R_para}}) + W_R vec{V_R} + W_R (-vec{V_{R_para}})]
         = [W_R^2 - vec{V_R} · (-vec{V_{R_para}}), vec{V_R} × (-vec{V_{R_para}}) + W_R (vec{V_R} - vec{V_{R_para}})]
         = [W_R^2 - vec{V_R} · (-vec{V_{R_para}}), vec{V_R} × (-vec{V_{R_para}}) + W_R vec{V_{R_perp}}]

     Take notice of the vector part of the result:

       vec{V_R} × (-vec{V_{R_para}}) + W_R vec{V_{R_perp}}

     This is a vector parallel to the rotational axis of S. Both vec{V_R} × (-vec{V_{R_para}}) and vec{V_{R_perp}} are orthogonal to the twist axis vec{V_T}, so we have shown that the rotational axis of S is orthogonal to the twist axis. Hence, we have proven that the formulas for S and T are valid for swing-twist decomposition.

     Conclusion

     That's all. Given a twist axis, I have shown how to decompose a rotation into a swing component and a twist component. Such decomposition can be used for swing-twist interpolation, an alternative to slerp that interpolates between two orientations, which can be useful if you'd like some point on a rotating object to travel along the shortest arc in 3D. I like to call such interpolation sterp. Sterp is merely an alternative to slerp, not a replacement. Also, slerp is definitely more efficient than sterp. Most of the time slerp should work just fine, but if you find unwanted orientational sway on an object's moving end, you might want to give sterp a try.
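For readers outside Unity, the non-singular path of the decomposition (and the property the proof establishes) can be checked with a few lines of plain quaternion arithmetic. A sketch, assuming (w, x, y, z) tuples and a unit twist axis; the 180-degree singularity handled in the C# version is omitted:

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def axis_angle(axis, angle):
    """Unit quaternion rotating by `angle` radians about unit `axis`."""
    s = math.sin(0.5 * angle)
    return (math.cos(0.5 * angle), axis[0] * s, axis[1] * s, axis[2] * s)

def swing_twist(q, twist_axis):
    """Decompose unit quaternion q so that q == swing * twist, where twist
    rotates about `twist_axis` (unit) and swing's axis is orthogonal to it."""
    w, x, y, z = q
    # T = [W_R, projection of the vector part onto the twist axis], normalized
    d = x * twist_axis[0] + y * twist_axis[1] + z * twist_axis[2]
    twist = (w, d * twist_axis[0], d * twist_axis[1], d * twist_axis[2])
    n = math.sqrt(sum(c * c for c in twist))
    twist = tuple(c / n for c in twist)
    # S = R * T^-1 (conjugate, since T is now unit length)
    tw, tx, ty, tz = twist
    swing = qmul(q, (tw, -tx, -ty, -tz))
    return swing, twist
```

Composing a known swing and twist and decomposing the product recovers both factors, which is exactly what the proof above guarantees.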
  9. Ming-Lun "Allen" Chou

    A Brain Dump of What I Worked on for Uncharted 4

  10. This post is part of My Career Series. Now that Uncharted 4 is released, I am able to talk about what I worked on for the project. I mostly worked on AI for single-player buddies and multiplayer sidekicks, as well as some gameplay logic. I'm leaving out things that never went into the final game and some minor things that are too verbose to elaborate on. So here it goes:

      The Post System

      Before I start, I'd like to mention the post system we used for NPCs. I did not work on the core logic of the system; I helped write some client code that makes use of it. Posts are discrete positions within navigable space, mostly generated from tools and some hand-placed by designers. Based on our needs, we created various post selectors that rate posts differently (e.g. stealth post selector, combat post selector), and we pick the highest-rated post to tell an NPC to go to.

      Buddy Follow

      The buddy follow system was derived from The Last of Us. The basic idea is that buddies pick positions around the player to follow. These potential positions are fanned out from the player, and must satisfy the following linear path clearance tests: player to position, position to a forward-projected position, and forward-projected position to the player.

      Climbing is something present in Uncharted 4 that is not in The Last of Us. To incorporate climbing into the follow system, we added the climb follow post selector that picks climb posts for buddies to move to when the player is climbing. It turned out to be trickier than we thought. Simply telling buddies to use regular follow logic when the player is not climbing, and telling them to use climb posts when the player is climbing, is not enough. If the player quickly switches between climbing and non-climbing states, buddies oscillate pretty badly between the two states. So we added some hysteresis, where buddies only switch states when the player has switched states and moved far enough while remaining in that state.
      In general, hysteresis is a good idea to avoid behavioral flickering.

      Buddy Lead

      In some scenarios in the game, we wanted buddies to lead the way for the player. The lead system was ported over from The Last of Us and updated. Designers used splines to mark down the general paths we wanted buddies to follow while leading the player. In the case of multiple lead paths through a level, designers would place multiple splines and turn them on and off via script. The player's position is projected onto the spline, and a lead reference point is placed ahead by a distance adjustable by designers. When this lead reference point passes a spline control point marked as a wait point, the buddy goes to the next wait point. If the player backtracks, the buddy only backtracks when the lead reference point gets too far away from the furthest wait point passed during the last advancement. This, again, is hysteresis added to avoid behavioral flickering.

      We also incorporated dynamic movement speed into the lead system. "Speed planes" are placed along the spline, based on the distance between the buddy and the player along the spline. There are three motion types NPCs can move in: walk, run, and sprint. Depending on which speed plane the player hits, the buddy picks an appropriate motion type to maintain distance from the player. Designers can turn speed planes on and off as they see fit. Also, the buddy's locomotion animation speed is slightly scaled up or down based on the player's distance, to minimize abrupt movement speed changes when switching motion types.

      Buddy Cover Share

      In The Last of Us, the player is able to move past a buddy while both remain in cover. This is called cover share. In The Last of Us, it makes sense for Joel to reach out to the cover wall over Ellie and Tess, who have smaller profiles than Joel. But we thought that it wouldn't look as good for Nate, Sam, Sully, and Elena, as they all have similar profiles. Plus, Uncharted 4 is much faster-paced, and having Nate reach out his arms while moving in cover would break the fluidity of the movement. So instead, we decided to simply make buddies hunker against the cover wall and have Nate steer slightly around them. The logic we used is very simple: if the projected player position based on velocity lands within a rectangular boundary around the buddy's cover post, the buddy aborts its current in-cover behavior and quickly hunkers against the cover wall.

      Medic Sidekicks

      Medic sidekicks in multiplayer required a whole new behavior that is not present in single-player: reviving downed allies and mirroring the player's cover behaviors. Medics try to mimic the player's cover behavior and stay as close to the player as possible, so that when the player is downed, they are close by to revive the player. If a nearby ally is downed, they will also revive the ally, provided that the player is not already downed. If the player is equipped with the RevivePak mod for medics, they try to throw RevivePaks at revive targets before running to the targets for revival (multiple active revivals reduce revival time); throwing RevivePaks reuses the grenade logic for trajectory clearance tests and animation playback, except that grenades were swapped out with RevivePaks.

      Stealth Grass

      Crouch-moving in stealth grass is also something new in Uncharted 4. For it to work, we needed to somehow mark the environment, so that the player gameplay logic knows whether the player is in stealth grass. Originally, we thought about making the background artists responsible for marking collision surfaces as stealth grass in Maya, but found that the necessary communication between artists and designers made iteration time too long. So we arrived at a different approach to marking down stealth grass regions.
      An extra stealth grass tag was added for designers in the editor, so they could mark, with high precision, the nav polys that they'd like the player to treat as stealth grass. With this extra information, we can also rate stealth posts based on whether they are in stealth grass. This is useful for buddies moving with the player in stealth.

      Perception

      Since we don't have listen mode in Uncharted 4 like The Last of Us, we needed to do something to make the player aware of imminent threats, so the player doesn't feel overwhelmed by unknown enemy locations. Using the enemy perception data, we added the colored threat indicators that inform the player when an enemy is about to notice him/her as a distraction (white), perceives a distraction (yellow), and acquires full awareness (orange). We also made the threat indicator raise a buzzing background noise to build up tension, and set off a loud stinger when an enemy becomes fully aware of the player, similar to The Last of Us.

      Investigation

      This is the last major gameplay feature I took part in before going gold. I don't usually go to formal meetings at Naughty Dog, but for the last few months before gold, we had at least one meeting per week driven by Bruce Straley or Neil Druckmann, focusing on the AI aspect of the game. After almost every one of these meetings, there was something to be changed and iterated on for the investigation system. We went through many iterations before arriving at what we shipped in the final game.

      There are two things that create distractions and cause enemies to investigate: player presence and dead bodies. When an enemy registers a distraction (the distraction spotter), he tries to get a nearby ally to investigate with him as a pair. The one closer to the distraction becomes the investigator, and the other becomes the watcher.
      The distraction spotter can become either the investigator or the watcher, and we set up different dialog sets for both scenarios ("There's something over there. I'll check it out." versus "There's something over there. You go check it out."). In order to make the start and end of an investigation look more natural, we staggered the timing of enemy movement and the fading of threat indicators, so the investigation pair doesn't perform the exact same actions at the same time in a mechanical fashion. If the distraction is a dead body, the investigator is alerted of player presence and tells everyone else to start searching for the player, irreversibly leaving the ambient/unaware state. The discovered dead body is also highlighted, so the player gets a chance to know what gave him/her away. Under certain difficulties, consecutive investigations make enemies investigate more aggressively, with a better chance of spotting the player hidden in stealth grass. On crushing difficulty, enemies always investigate aggressively.

      Dialog Looks

      This is also among the last few things I helped out with on this project. Dialog looks refers to the logic that makes characters react to conversations, such as looking at the other people and making hand gestures. Previously, on The Last of Us, people spent months annotating all in-game scripted dialogs with looks and gestures by hand. This was something we didn't want to do again. We had some scripted dialogs that were already annotated by hand, but we needed a default system that handles dialogs that are not annotated. The animators are given parameters to adjust the head turn speed, max head turn angle, look duration, cool-down time, etc.

      Jeep Momentum Maintenance

      One of the problems we had early on with the jeep driving section in the Madagascar city level was that the player's jeep could easily spin out and lose momentum after hitting a wall or an enemy vehicle, throwing the player far behind the convoy and failing the level.
My solution was to temporarily cap the angular velocity and the rate of change of the linear velocity direction upon impact against walls and enemy vehicles. This simple solution turned out to be quite effective, making it much harder for players to fail the level due to spin-outs.

Vehicle Deaths

Drivable vehicles were first introduced in Uncharted 4. Previously, only NPCs could drive vehicles, and those vehicles were constrained to spline rails. I helped handle vehicle deaths. There are multiple ways to kill enemy vehicles: kill the driver, shoot the vehicle enough times, bump into an enemy bike with your jeep, or ram your jeep into an enemy jeep to cause a spin-out. Based on the cause of death, a death animation is picked for the dead vehicle and all its passengers. The animation blends into physics-controlled ragdolls, so the death animation smoothly transitions into physically simulated wreckage.

For bumped deaths of enemy bikes, we used the bike’s bounding box on the XZ plane and the contact position to determine which of the four directional bump death animations to play. As for jeep spin-outs, the jeep’s rotational deviation from its desired driving direction is tested against a spin-out threshold.

When playing death animations, there’s a chance that the dead vehicle can penetrate walls. A sphere cast is used, from the vehicle’s ideal position along the rail had it not died, to where the vehicle’s body actually is. If the sphere cast generates a contact, the vehicle is shifted in the direction of the contact normal by a fraction of the penetration amount, so the de-penetration happens gradually across multiple frames, avoiding positional pops.

We made a special type of vehicle death, called the vehicle death hint. These are context-sensitive death animations that interact with the environment. Animators and designers place these hints along the spline rail and specify entry windows on the splines.
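The gradual de-penetration described above amounts to spreading the correction geometrically across frames. A minimal sketch, with names and the 0.25 fraction being my own assumptions (in the real engine, the penetration depth would be re-measured by the sphere cast each frame rather than carried over):

```python
def depenetrate_step(position, contact_normal, penetration, fraction=0.25):
    """Shift the vehicle along the contact normal by a fraction of the
    current penetration depth, so full de-penetration is spread across
    multiple frames instead of popping in a single frame.
    Returns the new position and the penetration left for the next frame."""
    correction = penetration * fraction
    new_position = tuple(p + n * correction
                         for p, n in zip(position, contact_normal))
    remaining = penetration - correction
    return new_position, remaining
```

Because each frame removes a fixed fraction, the remaining overlap decays geometrically; at a fraction of 0.25, roughly 94% of the penetration is resolved within ten frames, without any visible pop.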
If a vehicle is killed within an entry window, it plays the corresponding special death animation. This feature started off as a tool to implement the specific epic jeep kill in the 2015 E3 demo.

Bayer Matrix for Dithering

We wanted to eliminate geometry clipping the camera when the camera gets too close to environmental objects, mostly foliage. So we decided to fade out pixels in pixel shaders based on how close the pixels are to the camera. Using transparency was not an option, because transparency is not cheap, and there’s just too much foliage. Instead, we went with dithering: by combining a pixel’s distance from the camera with a patterned Bayer matrix, a portion of the pixels is fully discarded, creating an illusion of transparency.

Our original Bayer matrix was the 8×8 matrix shown on this Wikipedia page. I thought it was too small and resulted in banding artifacts. I wanted to use a 16×16 Bayer matrix, but it was nowhere to be found on the internet. So I reverse-engineered the 8×8 Bayer matrix and noticed a recursive pattern. I could have written out a 16×16 matrix by hand through pure inspection, but I wanted to have more fun and wrote a tool that can generate Bayer matrices of any power-of-2 size. After switching to the 16×16 Bayer matrix, there was a noticeable improvement in banding artifacts.

Explosion Sound Delay

This is a really minor contribution, but I’d still like to mention it. A couple weeks before the 2015 E3 demo, I pointed out that the tower explosion was seen and heard simultaneously, which didn’t make sense. Nate and Sully are very far away from the tower; they should have seen the explosion first and then heard it shortly after. The art team added a slight delay to the explosion sound for the final demo.

Traditional Chinese Localization

I didn’t switch to Traditional Chinese text and subtitles until two weeks before we locked down for gold, and I found some translation errors.
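The recursive pattern behind such a generator is well known: each quadrant of a 2n×2n Bayer matrix is four times the n×n matrix plus a per-quadrant offset following the 2×2 base pattern [[0, 2], [3, 1]]. Here is a minimal Python sketch of a power-of-2 generator along those lines (my own reconstruction, not the actual tool):

```python
def bayer_matrix(n):
    """Generate an n-by-n Bayer matrix (n must be a power of 2)
    containing each value 0 .. n*n-1 exactly once. Built recursively:
    each quadrant is 4x the half-size matrix plus a quadrant offset
    taken from the 2x2 base pattern [[0, 2], [3, 1]]."""
    if n == 1:
        return [[0]]
    half = n // 2
    prev = bayer_matrix(half)
    quadrant_offset = [[0, 2], [3, 1]]
    return [
        [4 * prev[i % half][j % half] + quadrant_offset[i // half][j // half]
         for j in range(n)]
        for i in range(n)
    ]
```

In a pixel shader, a typical ordered-dither test would then discard a pixel at screen coordinate (x, y) when its camera-distance fade value falls below something like (bayer[y % 16][x % 16] + 0.5) / 256 (the exact threshold mapping here is illustrative, not the shipped shader code).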
Most of the errors were literal translations from English to Traditional Chinese that just didn’t work in context. I didn’t think I would have time to play through the entire game myself while also looking out for translation errors. So I asked multiple people from QA to play through different chapters of the game in Traditional Chinese, and I went over the recorded gameplay videos as they became available. This proved pretty efficient; I managed to log all the translation errors I found, and the localization team was able to correct them before the deadline.

That’s It

These are pretty much the things I worked on for Uncharted 4 that are worth mentioning. I hope you enjoyed reading it.