

#5283606 Location tracking across scenes...

Posted by on 26 March 2016 - 02:20 PM

I once had that problem, too. I solved it the following way (described here using matrix transformations for column vectors):


An exit is a collision area with its own placement CX. It is linked to an entrance which also has its own placement CE. When an actor enters the exit area, the actor's global placement Ag is transformed into the exit's local coordinate system as


   AL := CX^-1 * Ag


This placement, although computed local to the exit, is directly re-interpreted as a placement local to the entrance, i.e.


   AL = CE^-1 * Ag'


resulting in the placement behind the entree being


   Ag' := CE * CX^-1 * Ag


Here the placement C of an exit or entrance could be modeled using translation, rotation, and scaling as usual


   C := T * R * S


so that different scales, orientations, and positions can be considered when desired.
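Assuming placements are stored as 4x4 homogeneous matrices for column vectors, the teleport step above can be sketched like this (all names and numbers are made up for illustration; the placements are pure translations here, so the inverse is just the negated offset, while with rotation and scaling involved a general matrix inverse would be needed):

```python
def mat_mul(a, b):
    # plain 4x4 matrix product, column-vector convention
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

CX = translation(10, 0, 0)        # exit placement: sits at x = 10
CX_inv = translation(-10, 0, 0)   # inverse of a pure translation
CE = translation(0, 0, 50)        # entrance placement: sits at z = 50

Ag = translation(10, 2, 0)        # actor's global placement at the exit

# Ag' := CE * CX^-1 * Ag
Ag_new = mat_mul(CE, mat_mul(CX_inv, Ag))

position = (Ag_new[0][3], Ag_new[1][3], Ag_new[2][3])
print(position)   # actor reappears 2 units offset from the entrance: (0, 2, 50)
```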

#5273330 Order of transformation confuse

Posted by on 30 January 2016 - 03:36 AM

The matrix product is associative. That means you can place pairs of parentheses around sub-terms as you like. Now, when you choose the parentheses so that you start with the product involving the vector, e.g. like so for column vectors (i.e. the vector is on the right and the matrices are on the left, typical for OpenGL)

    M1 * M2 * ( M3 * v )

and continue to the outer terms, like so

   = M1 * ( M2 * ( M3 * v ) )

then the parentheses tell you about the locality of the transformation. The wording for this is often "first apply M3, then M2, and then M1", although that wording is imprecise.
Notice that this works for row vectors as well (here the prime denotes the transpose)
   ( ( v' * M3' ) * M2' )* M1'
So, if you want to apply scaling to the model, rotate the scaled model, and translate the rotated/scaled model,** and you use column vectors, then the formula with parentheses looks like
    T * ( R * ( S * v ) )
which, obviously, corresponds to your 2nd variant of the code.
**Notice that I've deliberately chosen a wording here that is less blurry than "first …, then …" ;)

Opengl reads in reverse order ...

As said above, "order" without further context is misleading here. For example, you can calculate

    ( ( M1 * M2 ) * M3 ) * v

as well and get the same result.
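The associativity claim is easy to check numerically; a tiny sketch with a hand-rolled matrix product and invented 2x2 example matrices:

```python
def mat_mul(a, b):
    # generic matrix product for nested-list matrices
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

M1 = [[1, 0], [0, 2]]    # scale y by 2
M2 = [[0, -1], [1, 0]]   # rotate 90 degrees
M3 = [[3, 0], [0, 3]]    # uniform scale by 3
v  = [[1], [1]]          # column vector

# inside-out: M1 * ( M2 * ( M3 * v ) )
inside_out = mat_mul(M1, mat_mul(M2, mat_mul(M3, v)))
# outside-in: ( ( M1 * M2 ) * M3 ) * v
outside_in = mat_mul(mat_mul(mat_mul(M1, M2), M3), v)

print(inside_out == outside_in)   # True — same result either way
```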

#5268824 Vector4 W Component

Posted by on 02 January 2016 - 08:02 AM

If the W is 1, the vector is treated as a point when being multiplied by a matrix. That is, it will be translated, rotated and scaled!

I'm used to the terms "position vector" (if w!=0) and "direction vector" or "difference vector" (if w==0).


Notice that w can be anything, and that any vector with w!=0 denotes a position.


I assume not length.... It would be odd if W took part in the Length operation as the vector (2, 2, 2, 0) and the point (2, 2, 2, 1) would have different results.

This is because a position has no length. What you want instead is the distance of the position from a reference point, e.g. (0,0,0,1), hence computing

    | (2,2,2,1) - ( 0,0,0,1 ) | = | (2,2,2,0) |


You should make a distinction between positions and directions based on axioms like these:

  a) position + direction =: position

  b) hence position - position =: direction (or difference)

  c) position + position =: position (unnormalized)

  d) direction + direction = direction


So, should i just ignore W for these operations: Addition, Subtraction, Scalar Multiplication, Dot Product, Cross Product, Length and Projection?

Of course not! Certain operations like the cross-product do not make much sense in 4D. The length is something meaningful for direction vectors only (see above).


But then what happens when a Vector with a W of 0 and a Vector with a W of 1 are added? Point + Vector = Point makes sense in my head. But Vector + Point = ? That doesn't really make much sense...

Why should "Point + Vector" be different from "Vector + Point"? Vector addition is commutative.


Does this make sense to anyone else? How do you handle the W component of a Vector4?

A Vector4 makes the distinction between position and direction semantics explicit. This absolutely makes sense. 
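The axioms above can be encoded directly in a vector type; a minimal sketch (the class and its methods are made up for illustration):

```python
class Vec4:
    def __init__(self, x, y, z, w):
        self.x, self.y, self.z, self.w = x, y, z, w

    def __sub__(self, other):
        # position - position yields w == 0, i.e. a direction (axiom b)
        return Vec4(self.x - other.x, self.y - other.y,
                    self.z - other.z, self.w - other.w)

    def length(self):
        # only meaningful for direction vectors (w == 0), see above
        assert self.w == 0, "length of a position is undefined; subtract a reference point first"
        return (self.x ** 2 + self.y ** 2 + self.z ** 2) ** 0.5

p = Vec4(2, 2, 2, 1)          # a position
origin = Vec4(0, 0, 0, 1)
d = p - origin                # w: 1 - 1 == 0, a direction
print(d.length())             # distance of p from the origin: sqrt(12)
```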

#5268564 Xcode and OpenGL shaders

Posted by on 31 December 2015 - 02:15 AM

You have to find out how the compiled program accesses the files. Where does it look for them? Are there absolute paths in use, or paths relative to the application, the install directory, the user's home directory, or whatever? Only once that is identified can you work on a strategy to install the files where they are expected.

#5268442 How would you create a Weapons class?

Posted by on 30 December 2015 - 04:07 AM

It depends.

^^ This ;) Leaving external forces (deadlines, budget, objectives, engine requirements, …) aside, data-driven composed game objects are of course the way to go.


I would even say that it isn't necessary to have a class "Weapon". Isn't damage just an effect on a specific stat, induced by using an item in a specific way? Let's say the engine supports a StatActionEffect component class (perhaps inheriting from ActionEffect). The designer adds an instance of StatActionEffect to a game object that represents an item. The instance is parametrized with e.g. intensity=-40, target=collider, stat=health. Let's further say that the player's game object is enabled to use item game objects (this isn't the place to elaborate on that, but you can ask if you want to hear more about it). When the item is successfully used during gameplay, the existence of an ActionEffect component causes its application, and voilà: damage is applied. (Of course, when implementing such a thing, it is not really that easy.) Then anything, even a thrown vase, may be a "weapon".


Notice that, with the same reasoning, a class "Item" is also not strictly necessary. The thing is, one must stop thinking that OOP classes primarily represent physical objects.
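A bare-bones sketch of the idea, with all class and function names (StatActionEffect, GameObject, use_on) purely hypothetical, mirroring the parametrization from the example above:

```python
class StatActionEffect:
    """Data-driven effect: change a stat on a target by some intensity."""
    def __init__(self, stat, intensity, target):
        self.stat, self.intensity, self.target = stat, intensity, target

class GameObject:
    def __init__(self, name, **stats):
        self.name = name
        self.stats = dict(stats)
        self.components = []

def use_on(item, collider):
    # Using an item applies every effect component it carries.
    for effect in item.components:
        if isinstance(effect, StatActionEffect) and effect.target == "collider":
            collider.stats[effect.stat] += effect.intensity

vase = GameObject("vase")   # no Weapon class in sight
vase.components.append(StatActionEffect(stat="health", intensity=-40, target="collider"))

orc = GameObject("orc", health=100)
use_on(vase, orc)
print(orc.stats["health"])   # 60 — the thrown vase acted as a "weapon"
```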

#5268176 AI for an active time based combat system

Posted by on 28 December 2015 - 02:49 AM

The various AI techniques have their pros and cons. Your question shows that your current problem is the reasoning, i.e. the selection of the best option under consideration of various factors.


The strength of both FSMs and BTs is IMO not reasoning. FSMs are less prominent today because they become unmaintainable quickly when the number of states and/or transitions grows. However, reasoning in FSMs is implemented in the selection of one of the available transitions. Transitions are labeled with the conditions that need to be met, and the conditions result in a boolean value. Strictly speaking, exactly one transition must evaluate to true at a time, which makes formulating the conditions very hard in your case. Selection in a BT has many flavors, but none of the standard ones solve your problem. BT selectors are IMO mainly good for controlling behavioral proceedings.


Now, another AI tool is utility. Utility based AI is good at reasoning like you want, because

* it can consider any number and kind of factors for each option,

* it looks at all options in parallel when deciding which one to select,

* it can give multiple results in case an option can be applied to e.g. multiple targets.


My favorite solution is a tree of selectors and actions, where each selector can be a utility-based reasoner, a BT node, a planner, or even a FSM, whatever matches the need at that specific level of AI. Let's say a top-level utility-based reasoner has selected an option due to its best-fitting utility value. The action sequence of the option is then processed. If the behavior of the selected option can be modeled as a sequence, then all is fine. If not, then place e.g. a BT selector node (or even a sub-tree) therein. Or, if the actions need to be planned due to runtime circumstances, then place a planner therein.


However, implementing all of this in its entirety is a lot of work, perhaps too much for your game. I recommend starting with a tree structure in mind and adding kinds of selectors and actions as needed. A utility-based reasoner as the first selector kind is obviously fine. Supporters of utility-based AI are around here, probably most notably our Dave himself. So don't hesitate to ask if you have more questions.
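The core of a utility-based selector is small; a sketch (option names and scoring functions are invented for illustration):

```python
def select_option(options, context):
    """Score every option against the context; the highest utility wins."""
    return max(options, key=lambda opt: opt["utility"](context))

# Each option brings its own utility function over any factors it cares about.
options = [
    {"name": "attack",        "utility": lambda c: c["own_health"] / 100.0},
    {"name": "flee_in_panic", "utility": lambda c: c["enemy_count"] / 10.0},
    {"name": "idle",          "utility": lambda c: 0.1},
]

context = {"own_health": 30, "enemy_count": 8}
best = select_option(options, context)
print(best["name"])   # flee_in_panic (0.8 beats 0.3 and 0.1)
```

Notice that all options are scored in parallel rather than checked against hard boolean transition conditions, which is exactly the difference from the FSM case described above.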

#5267800 Replacing ECS with component based architecture, or traditional OOP, or somet...

Posted by on 24 December 2015 - 10:58 AM


There are many other similar examples, making something that jumps, something that flies, something that swims, but then nightmares occur when you need to make JumpingAndFlying, JumpingAndSwimming, FlyingAndSwimming, JumpingAndFlyingAndSimming ... Don't go there.

The rookie might consider using multiple inheritance here. My advice would be to steer clear of that train wreck too :)


Or maybe recognizing that so much inheritance is hard to maintain and then ending up with a god class, which is also the wrong way.

#5267762 Bvh player - blank!

Posted by on 24 December 2015 - 03:08 AM

Does anyone know what this part does?

That code computes the orientation matrix from a forward direction "dir" and a principal upward direction "up": Given 2 (normalized) vectors "up" and "dir", the vector "side" is computed as the cross-product of both and then normalized. Then "up" is recomputed, again using the cross-product. In the end m[16] is an orientation matrix with its z vector pointing along the direction of "dir", and its y vector pointing roughly in the direction of "up".


side = up x dir

length = length( side )

if length is very short then side = vector( 1, 0, 0 ), length = 1

side = side / length  # normalization

up = dir x side

m = matrix( side, up, dir )


 It may be used for the orthonormalization of an orientation matrix. It may be part of a "look at" function.
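The pseudocode above translates almost directly; a sketch (the eps threshold and fallback vector follow the description, everything else is illustrative):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def orientation(dir, up, eps=1e-6):
    """Build the orientation basis (side, up, dir) described above."""
    side = cross(up, dir)
    length = (side[0] ** 2 + side[1] ** 2 + side[2] ** 2) ** 0.5
    if length < eps:                        # dir and up (nearly) parallel: fallback
        side, length = (1.0, 0.0, 0.0), 1.0
    side = (side[0] / length, side[1] / length, side[2] / length)
    up = cross(dir, side)                   # recompute up to be orthogonal
    return side, up, dir                    # the 3 basis vectors of the matrix

side, up, dir = orientation(dir=(0.0, 0.0, 1.0), up=(0.0, 1.0, 0.0))
print(side, up, dir)   # the canonical basis: x, y, z
```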

#5267639 Replacing ECS with component based architecture, or traditional OOP, or somet...

Posted by on 23 December 2015 - 08:06 AM

Well, your thread title does not make much sense. ECS is a buzzword in game development with no concrete definition. It just says that there are entities (also called game objects) which are composed of components. As such, an ECS is a component based architecture. Furthermore, composition is the second programming idiom besides inheritance, and both are idioms of traditional OOP. It's just a fact that inheritance was overemphasized in many books and internet sites, so that composition was not recognized to the degree it should have been.


When you ask whether you should step away from composition and go with inheritance, then think again. There is a reason why your trials in the ECS world appear to be so smooth. Both composition and inheritance are tools, and you should use the right tool for the right problem.


Another thing to consider is the difference between code and data. It is much more elegant to make differences in game objects by parametrization (i.e. giving them different data) instead of inheriting another class. For example, the look of a spell is a different set of animation data and its damage is just a different value. So there is no need to make a Spell base class and inherit a FireBolt and an IceBolt class from it; the base class is sufficient (in this example; of course, situations may arise where having 1 or 2 inheritance levels would be okay).


The existence of "systems" to handle the components is a step that makes the management of all of the components easier. This is because an entity's components can be registered with the belonging systems when the entity is instantiated and added to the game world. Notice that components, although commonly named so, are very different in their meaning (e.g. a Placement component and a SpellSpawner component). Due to their inherent differences, it is not logical to manage them all the same way. If there are systems to manage them, then each system can be implemented in its own optimal way.


So, for me, going that way has no real alternative if you plan to do something more complex, especially if you don't want to re-program for each single game design decision. Of course, a small and isolated example like your scenario can be implemented in any other way, too.
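The registration step described above, in a bare-bones sketch (all names invented, not from a particular engine):

```python
class RenderSystem:
    """Each system manages its own components in whatever layout suits it."""
    def __init__(self):
        self.meshes = []              # internal layout chosen by the system

    def register(self, component):
        self.meshes.append(component)

    def update(self):
        for mesh in self.meshes:      # draw each registered mesh
            pass

class MeshComponent:
    pass

class Entity:
    def __init__(self, *components):
        self.components = list(components)

def spawn(entity, systems):
    # On instantiation, each component is handed to the system that manages it.
    for component in entity.components:
        systems[type(component).__name__].register(component)

render = RenderSystem()
spawn(Entity(MeshComponent()), {"MeshComponent": render})
print(len(render.meshes))   # 1 — the component is now managed by its system
```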

#5267624 dealing with a multiplicity of AI states

Posted by on 23 December 2015 - 05:02 AM

From my non-professional point of view: There is the layer of reasoning, where IMHO a utility-based selector fits very well. There is the layer of executing behaviors, where a BT fits well in case that the behavior can be pre-defined, or a planner in case that the behavior is assembled at runtime. (Although the borderlines are blurry; of course, there are "work-arounds" to solve problems of not well fitting kinds in each of the AI solutions.)


Let's take a warrior that suddenly faces a wild horde of enemies. A utility-based reasoner investigates some options, one of them being "flee_in_panic". This option turns out to have the highest utility, and hence is selected. The option's action part is in fact a BT sequence node. The encoded behavior is a "drop_weapon" action, followed by a "turn_around" action, followed by a "flee_from" action. While the former 2 actions are more or less direct instructions to the motion / animation layer, the 3rd action is a motion planner node. This node requests a path from the path-finding sub-system and starts executing it. For each path segment the appropriate sub-behavior is selected due to hints given with the path segment.


When looking at such a tree, the reasoning is done on the entirety of options at once, which is where utility-based selection shines. The designer wants the agent to drop the weapon, raise its hands, turn suddenly around, … hence an a-priori known behavior. Here a BT is useful (but that does not mean that a BT is restricted to fixed behaviors; it is just that a BT is fully pre-defined). Running along a path, however, depends on the path characteristics, which may only become known at runtime. Hence using a more flexible device like a planner seems appropriate.

#5266061 ECS: Systems operating on different component types.

Posted by on 12 December 2015 - 02:13 PM

Many such "problems" disappear immediately if one thinks of a standard solution (here e.g. simple lists if only few objects are involved, space partitioning, influence maps, …) encapsulated within systems with appropriate interfaces, and understands components as data suppliers. There is no enforcement that the component structures used internally are the same as those used externally of a sub-system.


For example, when an entity with an "interactive usage" component is instantiated, the component causes the "interaction sub-system" to prepare its internal representation so that it considers the component, or at least its meaning. Because the sub-system provides a query interface, other sub-systems can ask for, say, any interaction possibility within a given range. When the player controller detects the input situation "interactive use" while running its update, it queries the interaction sub-system for suitable devices.

#5254230 Smoothing me some normals.

Posted by on 27 September 2015 - 03:14 AM

// set all normals to zero
for each vertex normal (n)
  n = 0,0,0

// add in each face normal to each vertex normal
for each face
  fn = calculate face normal 
  for each vertex normal in face (vn)
     vn += fn

// normalize normals
for each vertex normal (n)
   n = n / length(n)

This algorithm calculates a vertex normal by averaging the normals of the surrounding faces. While there is nothing inherently wrong with that, one usually wants to apply a weight so that the face normals have differently rated influence. Typical weights are the face areas; another is the angle of the face at the vertex of interest.
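Area weighting comes almost for free: the un-normalized cross product of two triangle edges has a length proportional to the face area, so summing the raw cross products already weights each face by its area. A sketch (mesh data and names are made up for illustration):

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def sub(a, b):
    return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]

def normalize(v):
    l = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return [v[0] / l, v[1] / l, v[2] / l]

def vertex_normals(vertices, faces):
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in faces:
        # un-normalized face normal: length encodes the area weight
        fn = cross(sub(vertices[i1], vertices[i0]), sub(vertices[i2], vertices[i0]))
        for i in (i0, i1, i2):
            for k in range(3):
                normals[i][k] += fn[k]
    return [normalize(n) for n in normals]   # normalize only at the end

# Two triangles in the z = 0 plane: every vertex normal comes out as (0, 0, 1).
verts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]]
tris = [(0, 1, 2), (1, 3, 2)]
print(vertex_normals(verts, tris))
```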

#5254121 Intersect ray with heightmap

Posted by on 26 September 2015 - 06:00 AM

One alternative to a brute force method would be to reduce the set of triangles to possible candidates. Another possibility is to use approximations first and compute more costly methods only if the approximation says so. For example:


A height map has a regular grid of z samples in the x/y plane. When seen from above, it looks like a regular arrangement of quadratic cells. Each cell has 8 neighbors (or fewer if placed at an edge of the map). The ray, also seen from above, passes through these cells. So you can handle this as a 2D problem first: Start at the cell which contains the camera, calculate through which of the 4 edges the ray leaves the cell, determine the neighboring cell at that edge, and continue the same from there. In this way you can iterate over exactly those cells that are touched by the ray. Now, before hopping to the next cell, determine whether the ray passes through the ground of the cell. If not, then go to the next cell; otherwise the cell of interest is found.


The ground test can be optimized, too. If you have the minimum and maximum height value of the current cell, and you have the entry and exit heights of the ray, then a first test would be a simple interval overlapping test (like a 1D bounding volume hit). Notice that the entry height of a cell is the same as the exit height of the previously visited cell, so passing this value would be beneficial. Notice that the entry / exit heights again are computed by ray / plane intersections, but the planes are axis aligned and hence the intersections are easier to calculate.
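The cell hopping seen from above is a standard 2D grid traversal; a sketch of just that part (function and parameter names are invented, and the per-cell height test described above would be plugged into the loop):

```python
def cells_along_ray(ox, oy, dx, dy, max_steps=16):
    """Walk from cell to cell through whichever edge the ray leaves first."""
    cx, cy = int(ox), int(oy)
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # parametric distance to the next vertical / horizontal cell edge
    t_x = ((cx + (dx > 0)) - ox) / dx if dx else float("inf")
    t_y = ((cy + (dy > 0)) - oy) / dy if dy else float("inf")
    dt_x = abs(1.0 / dx) if dx else float("inf")
    dt_y = abs(1.0 / dy) if dy else float("inf")
    cells = [(cx, cy)]
    for _ in range(max_steps):
        # here the ground / height-interval test for the current cell would go
        if t_x < t_y:
            cx += step_x; t_x += dt_x
        else:
            cy += step_y; t_y += dt_y
        cells.append((cx, cy))
    return cells

print(cells_along_ray(0.5, 0.5, 1.0, 0.25, max_steps=4))
# [(0, 0), (1, 0), (2, 0), (2, 1), (3, 1)]
```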

#5253083 Skeletal animation is hella slow

Posted by on 19 September 2015 - 01:00 PM

I'm sorry but I still don't get this. When you say "animation sub-system" do you mean my AnimationController class?
I implemented it in a way that's every AnimationData and SceneNode* pair have a matching Timer object for their animation.
Can you explain further?

With "animation sub-system" I do not mean a specific class but a group of collaborating objects, one of which is suitable to fulfill the discussed task. A class named AnimationController usually does not do what I mean, but maybe yours does.


For further explanation, let me begin with the game loop. The game loop is commonly organized as a repeated execution of a defined sequence of tasks. Game time advancing, input collecting, player character control, animation playback, physics simulation, … and finally rendering are typical tasks. From the point of view of the game loop, these things are high level tasks that, for the sake of separation, are usually associated with various sub-systems like input, player control, animation, physics, rendering, and so on. So the game loop has a list of such tasks, and it calls each one by an update(time, delta) or so. This is also true for the animation sub-system; it may look like

    animation->update(time, delta);

meaning "update all animations in the scene". So the routine iterates over the running animations and updates each one (and here I would expect an AnimationController, one per animation). Now, this iteration should not go through the scene tree and look out for animated nodes. Instead it should go through an internal list. This list holds nothing but each and every animated node. Iterating it means that every found node is known to be animated. No need to determine this property, no need to skip "inanimate" nodes. Further, the animation sub-system gets the opportunity to order the nodes as is most suitable for it, so it does not depend on the order in the scene tree.
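The internal-list idea in a minimal sketch (all class names invented, not from a specific engine):

```python
class AnimationSystem:
    """Keeps its own flat list of animated nodes; no scene-tree walk per frame."""
    def __init__(self):
        self._animated = []            # only nodes that actually animate

    def register(self, node):
        self._animated.append(node)    # done once, when the node is added

    def update(self, time, delta):
        for node in self._animated:    # every entry is known to be animated
            node.advance(delta)

class AnimatedNode:
    def __init__(self):
        self.t = 0.0

    def advance(self, delta):
        self.t += delta                # advance this node's local animation time

animation = AnimationSystem()
node = AnimatedNode()
animation.register(node)
animation.update(time=1.0, delta=0.016)   # the game loop's per-frame call
print(node.t)   # 0.016
```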

#5252868 Smoothing me some normals.

Posted by on 18 September 2015 - 06:54 AM

Best way to smooth normals. I know of one method, not sure if it's very memory efficient.

The information you gave us is very sparse.


1. Which criterion do you use to qualify what good, better, and best is in this context?

2. What does "smooth normals" mean? An average of normals as occurs when computing a vertex normal from the normals of surrounding faces? Or an interpolation of normals as occurs when filling gaps between samples? Or realigning existing normals so they appear smoothly arranged? Or something else?

3. Which method is the one you know?

4. How is normal "smoothing" meant to be memory efficient / inefficient?