haegarr

Members
  • Content count: 3989
  • Community Reputation: 7372 Excellent

About haegarr

  • Rank: Contributor
  1. Perspective mode cameras have a view volume, too. The volume is a frustum of a pyramid (as opposed to a cuboid for orthographic mode cameras). It is "just" a matter of aligning the volumes.

     In Unity a perspective mode camera has the parameter "field of view", which is the view's angle in degrees along the local y axis. Using the aspect ratio of the view, the corresponding angle for the local x axis can be calculated. Now, since the horizontal view angle spans from the left side of the view to the right side, with the line of sight exactly halfway between them, the angle between the lines of sight of two aligned cameras is just the horizontal view angle.

     So create a new local co-ordinate system, put both cameras inside it (positioned at the local origin), rotate the "left" one by half the horizontal view angle to the left around the local y axis, rotate the "right" one by the same amount to the right, and let the camera controls alter the entire co-ordinate system (see the sketch below).

     Besides the simple approach above, more sophisticated things can be done by adapting both projection matrices.
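A minimal sketch of the angle math above, in C++ rather than Unity's C# and with illustrative names (not Unity API):

```cpp
#include <cmath>

// Horizontal field of view computed from the vertical field of view and
// the aspect ratio (width / height). All angles in radians.
double horizontalFov(double verticalFov, double aspect) {
    return 2.0 * std::atan(std::tan(verticalFov / 2.0) * aspect);
}

// Each camera is yawed by half the horizontal view angle away from the
// rig's forward direction, so that the two view volumes meet edge to edge:
//   yawLeft  = -0.5 * horizontalFov(vFov, aspect);
//   yawRight = +0.5 * horizontalFov(vFov, aspect);
```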
  2. At least two solutions are possible:

     a) Fake solution: Take the 4 corner positions of the mesh where the image is mapped onto. Translate them to be centered around the camera space origin, if needed. Ensure that they have a depth co-ordinate. Transform them with a perspective projection. Use the result in orthogonal space.

     The above solution is a fake because it distributes the image pixels evenly, i.e. it does not consider depth foreshortening for the pixels. However, it will probably not be noticed, especially for an animation with decent speed.

     b) Real solution: Do a render-to-texture with a full perspective projection, and render the result with an orthogonal projection onto the standard target.

     This method will do depth foreshortening, but it is more complex.
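A sketch of the fake solution a), assuming corner points centered around the origin with depth z >= 0 pointing away from the camera; the 'focal' parameter and all names are illustrative:

```cpp
struct Vec3 { float x, y, z; };

// Scale x and y by a perspective divide factor: corners nearer to the
// camera spread outward, corners farther away move inward. The depth is
// kept unchanged so the result can be fed to an orthographic projection.
Vec3 fakePerspective(const Vec3& p, float focal) {
    float w = focal / (focal + p.z);   // assumes focal + p.z > 0
    return { p.x * w, p.y * w, p.z };
}
```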
  3. Such an attempt to solve the problem is discussed here: Synchronized Behavior Trees

     (BTW: That article also states my own opinion, that BTs are fine for executing decisions made elsewhere :))
  4. BTW: I'm *not* running Unity3D at the moment, so we need to solve the problem on a more theoretical basis.

     to A: What I asked for is the ratio of the bounding box size to the area of damage. If the radius of the sphere that locates the damage is, say, 1.0, while the bounding box is 2 by 0.4 by 0.7, then every applied damage has an influence on at least half of the vertices. Hence my question: Is the ratio low enough?

     to B: Think of the radius being 0.3 when damage is inflicted on the front right corner of the car. The bounding box's size in x direction is 2, and it is covered by 2 faces from front to back. Only the vertices at the front are located within the sphere of damage. What we would expect is that the region 0.7 to 1.0 along the x direction is damaged. However, the mesh's face ranges from 0.0 to 1.0 in that direction, and hence the deformation appears over half the length of the car despite the relatively small damage sphere.

     to C: "Normalized distance" means that it ranges from 0.0 to 1.0. Since the unnormalized value ranges from 0.0 to radius, dividing by radius is the calculation to be done for normalization.

     to D: Well, when this is done "correctly", this kind of calculation will be removed entirely.

     It seems worthwhile to first clarify how exactly an inflicted damage should look...
  5. Several things:

     a) What is the ratio of the size of the bounding box of the mesh (enough "of"s for now ;)) to the damage infliction?

     b) From the video it seems to me that the mesh has too low a resolution for deformation. Either use a higher resolution in advance or refine the resolution locally just before applying the damage.

     c) Damage infliction is done by applying a translation in the direction of and proportional to the position of the vertex. That means that the intentionally same damage factor applied to a vertex farther away from the origin has a higher impact, and that the bumps are not oriented correctly. What you probably want is to support a direction of damage (i.e. the damage is input as a vector with its length denoting the damage amplitude), scale that with one minus the normalized distance from the impact point, and add that to the vertex position (see the sketch after this list).

     d) Using a translation in the range [-1,+1] gives inside and outside bumps; perhaps not what you want.
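A minimal sketch of the displacement described in c), assuming an impact point, a damage vector, and a damage radius (all names illustrative):

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3  operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vec3  operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3  operator*(float s)       const { return { x * s, y * s, z * s }; }
    float length()                 const { return std::sqrt(x * x + y * y + z * z); }
};

// Displace a vertex by the damage vector, attenuated by one minus the
// normalized distance to the impact point; vertices outside the damage
// radius are left untouched.
Vec3 applyDamage(const Vec3& vertex, const Vec3& impact,
                 const Vec3& damage, float radius) {
    float dist = (vertex - impact).length();
    if (dist >= radius) return vertex;     // outside the sphere of damage
    float falloff = 1.0f - dist / radius;  // normalized distance, inverted
    return vertex + damage * falloff;
}
```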
  6. For each dimension, with box A ranging from Amin to Amax, and B from Bmin to Bmax, it should be sufficient to calculate (without having tested its correctness):

     d = 0
     if Amax < Bmax then d = Bmax - Amax  // shifting A by a positive value to align with B's max edge
     if Amin > Bmin then d = Bmin - Amin  // shifting A by a negative value to align with B's min edge
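A direct transcription into a per-axis helper, keeping the above logic as written (and hence equally untested):

```cpp
// Per-axis shift d that moves interval [aMin, aMax] of box A so that it
// aligns with interval [bMin, bMax] of box B; apply once per dimension.
float alignShift(float aMin, float aMax, float bMin, float bMax) {
    float d = 0.0f;
    if (aMax < bMax) d = bMax - aMax;  // positive shift toward B's max edge
    if (aMin > bMin) d = bMin - aMin;  // negative shift toward B's min edge
    return d;
}
```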
  7. What I wanted to say: Problems like the one mentioned in the OP may occur because the one tool at hand is not suitable for the job. Splitting the problem into sub-problems and using the correct (whatever that means in the given situation) tool for each one should be preferred.

     "But then none of the sequences on the way to the current node will be evaluated, because they are all 'running', and the BT won't be able to react to changes...? And same with Selectors."

     Nodes work according to their type. A selector node is allowed to work in another way than a sequence node.
  8. When nodes are entered they are handled considering whether or not they are already running. A sequence node, for example, would not restart if it is running. I think that would do the trick.

     However, the issues found are not new: When I first read about BTs they sounded like a wonderful tool. Later on, while studying existing implementations, I saw 2 major drawbacks (where the 2nd one is not inherent to BTs by themselves):

     a) BTs need crutches to get them to work (halfway?) as expected, especially w.r.t. reaction. For example I saw "interruptible" nodes.

     b) The same BT is used to do everything from decision making down to directly controlling the animation and placement of models. That's a mixing of responsibilities, like was once done with the nowadays overloaded scene graphs.

     The conclusion I drew from this is once again: There is no tool to rule them all; instead, use the tool that is appropriate for solving the given single task. In detail:

     a) The strict left-to-right execution is not a good thing for decision making. In other words, decision making should be done with another tool (currently I favor utility-based AI for this).

     b) BTs are good for executing (more or less) canned behaviors.

     These are just my 2 cents.
  9. In my approaches transitions are generally modeled as states, and changing states happens instantaneously.

     The example in the OP may then be modeled with a) a stable idle state; b) a transitional state that replays a canned animation to bring the skeleton into a defined motion phase with a defined speed; c) a transitional state that interpolates over time from the current speed to the goal speed, setting appropriate animation clips for blending; d) a stable state that continues the walk cycle at the reached speed.

     As an alternative to c) and d) there could be: e) a stable state that regulates the speed depending on the difference between the current speed and the goal speed, hence running a continuous blending.

     However, such a state machine is not only controlling animation, since it also controls locomotion. Its granularity w.r.t. states is derived from animation needs, and hence does not reflect the difference between e.g. "still" and "moving" very well. When understood as a component of a locomotion system, states like b) can be seen as synchronization points where control is granted for a short time to the animation system. When control is returned, the state machine immediately enters state e).
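A minimal sketch of the states named above (illustrative names only):

```cpp
// States a) through e) from the example; transitions between them happen
// instantaneously, while transitional states run until their canned
// animation or blend has completed.
enum class LocomotionState {
    Idle,          // a) stable idle state
    SyncToCycle,   // b) canned animation into a defined motion phase/speed
    BlendToSpeed,  // c) interpolate current speed toward the goal speed
    Walk,          // d) continue the walk cycle at the reached speed
    RegulateSpeed  // e) alternative: continuous blending toward the goal speed
};
```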
  10. The keywords mentioned are "rotation matrix decomposition". Putting these into an internet search engine provides you with explanations, formulas, and also code.

     However, as already indicated, the solution depends on how the axes and the order of rotations are defined (together giving the composition of the total rotation matrix). Even the terms yaw / pitch / roll are not always used consistently. So copying any given code may give results that are wrong for your situation.

     This stuff is a conversion from spherical co-ordinates to Cartesian co-ordinates (although using the Greek letter rho for the radius / distance is somewhat misleading, since Greek letters are by convention often used for angles). There is of course a reverse way, as described e.g. here.

     However, this is not the same as Euler angles, because spherical co-ordinates describe a position while Euler angles describe an orientation! Notice the mismatch between the radius / distance on the one side and the roll on the other side.

     The view matrix is probably not even what you want to decompose. Traditionally the view matrix is the transformation from world space into camera space.

     No offense, but Alberth's advice is really what you should consider. Without an understanding of what the used code does, your project is about to fail sooner rather than later.

     So ... what exactly is the "post-arcball mathematics" you want to do?
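For reference, a sketch of the spherical-to-Cartesian conversion mentioned above, using one common convention (theta measured from the +z axis, phi in the x-y plane); other conventions swap axes or angles, which is exactly the pitfall described:

```cpp
#include <cmath>

// Spherical (r, theta, phi) to Cartesian (x, y, z), physics convention.
// Copying this without checking the convention reproduces the problem
// described above: the formulas silently depend on the axis definitions.
void sphericalToCartesian(double r, double theta, double phi,
                          double& x, double& y, double& z) {
    x = r * std::sin(theta) * std::cos(phi);
    y = r * std::sin(theta) * std::sin(phi);
    z = r * std::cos(theta);
}
```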
  11. I think the following is meant. Perhaps not a totally correct explanation, though...

     Applying a non-identity rotation maps a couple of points onto themselves. These points form a line which is often called the axis of the rotation. Further, a rotation maps the point zero onto zero due to its multiplicative nature. In other words, the axis of rotation always passes through zero. If this should not be the case, you need to temporarily shift the space so that the axis correctly passes through zero, apply the rotation, and undo the shift.

     I.e. with p being a point the axis should pass through, T( p ) * R * T( -p ) would do the trick, where T denotes a translation matrix and R the rotation matrix.
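The T( p ) * R * T( -p ) composition, written out in 2D for brevity (the 3D case works the same with 4x4 matrices; names are illustrative):

```cpp
#include <cmath>

// Rotate point (x, y) around pivot (px, py) by 'angle' radians:
// shift the pivot to the origin, rotate, and shift back.
void rotateAroundPivot(double& x, double& y,
                       double px, double py, double angle) {
    double dx = x - px, dy = y - py;            // T( -p )
    double c = std::cos(angle), s = std::sin(angle);
    double rx = c * dx - s * dy;                // R
    double ry = s * dx + c * dy;
    x = rx + px;                                // T( p )
    y = ry + py;
}
```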
  12. It is there, although a bit hidden: below the article, among the download links.
  13. While I don't know how the DirectX TK works ... from a principled point of view: Separate different tasks into different objects! Well, from your question it seems to me that your experience is limited (no offense intended), and hence a full-blown solution may be too heavy. So feel free to simplify one aspect or another ... but you should at least wrap your head around the principles, because almost all solutions suggested here in the forums are of that complex kind (for a reason, of course).

     a) The sprite sheet is a resource. It may contain clips of several sprites and/or clips of several motions, so it is often shared. To have a clearly defined responsibility for the lifetime of a (shared) resource, it is usually managed by some interrelated objects. Interrelated objects for a specific task are often called a sub-system. Here we can speak of the resource management sub-system. Its front object, i.e. the one that is "seen" by the others, provides an interface to get a named resource. If that happens, a cache (a simple map, for example) is asked whether it already contains the resource. If not, a loader is requested to load the resource from a file, and the result is both stored in the cache and returned to the calling object outside (a sketch follows after this post).

     So: The component should not load the sprite sheet. It should request the resource management sub-system to return access to a sheet with a name that is known to the sprite component.

     b) What exactly is rendered for the sprite? Exactly one image clip. Next frame it may be the same clip or another clip. So the visual component is just a specification of which image clip is currently active, i.e. it is like a variable that stores a reference to the sprite sheet and the clip co-ordinates. This "variable" can be set by e.g. the animation sub-system, so the look of the sprite will change over time. However, when it is time to render the sprite, the rendering just takes the relevant data from the visual component and pushes it (perhaps after some preparation) to the render batch. Which is the correct batch? The sprite does not know, because the render sub-system is out of its scope (again: separate the tasks).

     So: The component should not have a member referencing the batch. At most, if it really has a render() method, the batch should be passed as an argument to it. (However: In my understanding, the visual component would provide all necessary data but would not render by itself.)

     c) Regarding the coupling to the DirectX TK: It depends on the goal. If the goal is getting something done, and the result and any "derived work" are restricted to an area where the TK is available, then it's IMO okay. Once that is done, the next attempt can be a bit more abstract, if you wish.
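A minimal sketch of the cache-then-load lookup described in a); all names are illustrative, not DirectX TK API:

```cpp
#include <map>
#include <memory>
#include <string>

struct SpriteSheet { /* image data, clip table, ... */ };

// Front object of the resource management sub-system: returns the cached
// sheet if present, otherwise loads it, stores it in the cache, and
// returns it to the calling object.
class ResourceManager {
public:
    std::shared_ptr<SpriteSheet> getSpriteSheet(const std::string& name) {
        auto it = cache_.find(name);
        if (it != cache_.end()) return it->second;  // cache hit
        auto sheet = loadFromFile(name);            // cache miss: load
        cache_[name] = sheet;
        return sheet;
    }

private:
    std::shared_ptr<SpriteSheet> loadFromFile(const std::string& /*name*/) {
        // Engine-specific file loading and decoding would go here.
        return std::make_shared<SpriteSheet>();
    }

    std::map<std::string, std::shared_ptr<SpriteSheet>> cache_;
};
```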
  14. It's lesson #48 in the NeHe archive here.

     The current axis and angle exist temporarily during dragging the mouse, but in the end all rotations (meaning a couple of click-drag-release cycles) are accumulated in a rotation matrix (see the sketch below). If the overall rotation angles should be extracted from the matrix, then some kind of matrix decomposition has to be done. Such a decomposition depends on how one defines the formula that expresses the composition.
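The accumulation itself is a single matrix product per click-drag-release cycle; a sketch with a plain 3x3 matrix type (column-vector convention assumed, so the newest rotation multiplies from the left):

```cpp
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Fold the rotation of the finished drag into the accumulated rotation:
// overall' = drag * overall (column-vector convention).
Mat3 accumulate(const Mat3& drag, const Mat3& overall) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += drag[i][k] * overall[k][j];
    return r;
}
```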
  15. As Scouting Ninja noted, you need to have associated data for the sprite. Things I used in the past for each sprite in a data driven approach are (see the sketch below):

     * A unique ID.
     * A reference that denotes the sprite sheet to be used.
     * A frame which denotes the pixel region on the sprite sheet.
     * A base point, given relative to the frame on the sheet.
     * A transition table with
       ** a condition,
       ** the ID of the following sprite,
       ** a distance that denotes how to alter the world / screen position when using this transition.

     Regarding the base point: It defines the one point that is mapped to the world / screen position of the sprite. For example, each "legged figure on the ground" sprite has its base point between the feet on the ground. The base point need not fit into the sprite's frame; e.g. the base point may be on the ground during jumping.

     Regarding the transition condition: Of course, player input (in case of a PC) or AI decisions (in case of an NPC) play a role here, and automatic transitions are possible. But timing can also be incorporated, e.g. input needs to occur in a defined time range. Even entity or world state flags can be used.

     For advancing, look at the transition table of the currently selected sprite, pick the first transition whose condition evaluates to true, alter the world / screen position according to the offset, and set the referenced follower as the new selected sprite.

     For rendering, fetch the base point of the currently selected sprite, multiply it by the "sprite-to-screen" pixel ratio, and add the result to the current world / screen position. Fetch the size of the sprite's frame, scale it by the same pixel ratio, add it to the position computed beforehand, and use this for the vertices of the image-carrying rectangle. For each vertex, set the corresponding frame co-ordinates for the texture access.

     The above is somewhat coarse, of course; the details need to be elaborated with respect to the engine.
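A minimal sketch of the per-sprite data described above (illustrative names; the condition is kept as an opaque string here, engine details omitted):

```cpp
#include <string>
#include <vector>

struct Transition {
    std::string condition;        // input/AI/timing/flag expression to evaluate
    int         nextSpriteId;     // ID of the following sprite
    float       offsetX, offsetY; // how to alter the world / screen position
};

struct Sprite {
    int id;                       // unique ID
    int sheetId;                  // reference to the sprite sheet to be used
    int frameX, frameY,           // pixel region on the sprite sheet
        frameW, frameH;
    float baseX, baseY;           // base point, relative to the frame
    std::vector<Transition> transitions; // transition table
};
```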