
#5289569 <OpenGL/gl.h> Include Failing

Posted by haegarr on 01 May 2016 - 09:20 AM

It is not exactly clear to me which build system you are using. Is it Xcode itself? If yes, then adding "/System/Library/Frameworks/" should not be needed.


However, the frameworks therein are runtime components. The headers are development components - they are not bundled with the runtime components. Instead, they are bundled with the SDKs. For example, the SDK for Mac OS X 10.9 ships with such headers.




#5289545 OpenGL - Render Plane equation

Posted by haegarr on 01 May 2016 - 03:08 AM

I don't know how your framework linearizes matrices, so this code snippet
   Matrix4 rot = new Matrix4(...) 

may or may not be correct.


But there is a noticeable difference in transformation computation between your 1st and your 2nd post. In the 1st post you compute

    M[i+1] := M[i] * T * R

with M being the transformation matrix on the stack, T being the translation and R the rotation. The snippet in your 2nd post computes

    M[i+1] := I * R * T

instead, where I is the identity matrix. Notice that (a) the matrix being formerly on the top of the stack is now ignored, and that (b) the order of translation and rotation is exchanged.


Hence try to change this line

   Matrix4 result = rot * trans;

to

   Matrix4 result = trans * rot;

Further, if the former stack matrix should play a role, instead of 
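For illustration, here is a minimal plain-Python sketch (the helper names are mine, not from the thread) showing that the two products transform a point differently under the column-vector convention:

```python
import math

def matmul(a, b):
    # 4x4 product of row-major nested lists (column-vector convention)
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, v):
    # transform a 4-component vector by a 4x4 matrix
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

T = translation(5, 0, 0)
R = rotation_z(math.pi / 2)   # 90 degrees about z

p = [1, 0, 0, 1]              # a point (w == 1)

# trans * rot: rotates first, then translates -> (5, 1, 0)
print(apply(matmul(T, R), p))
# rot * trans: translates first, then rotates -> (0, 6, 0)
print(apply(matmul(R, T), p))
```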

#5289527 OpenGL - Render Plane equation

Posted by haegarr on 01 May 2016 - 01:18 AM

Your doubts are valid, but the reason is wrong. The vertices are placed onto the local x/y plane, hence the normal is along the direction of the local z axis. Putting (0,0,1) into your rotation code and considering that a rotation by 360° is identical to a rotation by 0°, you correctly see the front of your plane. However, using normals along x (1,0,0) or y (0,1,0) with your rotation code still gives you the same vertices, since all the transformations are the same. Changing the order of rotations does not make any difference then.


Actually, using a normal and a distance leaves several degrees of freedom open. Rendering a plane with vertices needs those degrees of freedom to be fixed. What I mean is that you need a definition of

a) an origin of the plane, so that there is a point with local co-ordinates (0,0,0), 

b) an orientation angle around the normal (think of the rolling angle of a camera; the normal gives you just 2 of the 3 angles).


You solved a) by using the point

   origin := (0,0,0) + distance * normal


You solved b) by just picking an angle, e.g. so that the rolling is zero.


So with these fixes you have a position, and an orientation given as a direction plus "no rolling". Let's express "no rolling" as

   up_vector := (0,1,0)

And instead of a pure direction, let's use it as the difference vector from the origin to a tip point

   center_point := origin + normal * 1


Now (with the origin and the center point and the up vector), what we have here are the ingredients for the gluLookAt function! Although being described as a function to compute a view matrix, the function actually just builds a transformation matrix with a translation and rotations so that the z axis points in a specific direction.


The math behind the gluLookAt function isn't complicated, and leaving OpenGL's matrix stack behind is always a Good Thing™, but I assume you may be happy with the sketched solution. Otherwise feel free to ask :)
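To sketch what such a gluLookAt-like construction could look like, here is a minimal Python example (the function name, matrix layout, and the y-up default are my assumptions, not from the thread):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return [c / l for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def plane_transform(normal, distance, up=(0, 1, 0)):
    """Build a 4x4 column-vector transform that places the local x/y plane
    so that its local z axis points along `normal`, offset by `distance`."""
    n = normalize(normal)
    side = normalize(cross(list(up), n))   # local x axis
    new_up = cross(n, side)                # local y axis, re-orthogonalized
    origin = [distance * c for c in n]     # origin := (0,0,0) + distance * normal
    return [[side[0], new_up[0], n[0], origin[0]],
            [side[1], new_up[1], n[1], origin[1]],
            [side[2], new_up[2], n[2], origin[2]],
            [0, 0, 0, 1]]
```

Note that the construction degenerates when the normal is (anti-)parallel to the chosen up vector; a real implementation has to pick a fallback up vector in that case.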

#5288428 Engine Subsystems calling each other

Posted by haegarr on 24 April 2016 - 06:28 AM

The inconvenience of passing a pointer around is not a sufficient reason to tolerate the evil of globals or singletons!


It is common that sub-systems depend on other sub-systems. These dependencies should be directed and acyclic, i.e. a lower level sub-system should never ever call a higher level sub-system. Moreover, a sub-system is not necessarily represented by a single class; from a naming point of view, a "system" is always understood as elements that work together to collectively fulfill a specific task. Graphical rendering, for example, is often implemented as a process passing through several layers and done in several steps: Iterating the scene and collecting renderable entities (perhaps restricted to a specific kind), culling them by visibility, converting them from high level "models" to meshes and textures and shaders, sorting them by some performance criteria, and then sending them to the wrapper behind which the 3rd party graphics API is hidden.


I understood the Map class that is mentioned in several posts of the OP as a game level. As such it is a high-level data structure (not a sub-system at all, of course). The wrapper for the graphical rendering API is a lowest-level thing. Hence giving the wrapper a Map instance is already a design flaw.


In the end, when a well defined structure of sub-systems is instantiated, pointers need to be passed around for sure. After instantiation, the sub-systems know where their serving sub-systems are, and the need for passing pointers exists just for runtime varying things.


Just my 2 cents, of course.

#5288395 Help with python :(

Posted by haegarr on 23 April 2016 - 11:49 PM

A key in an associative array needs to be defined. In your case the key is given by a variable which itself is not defined.


a) The key defined by a string literal:

John_Smith = { "name": "John Smith" }

b) The key defined by a string variable: 
name = "name"
John_Smith = { name: "John Smith" }
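Putting both variants together into one runnable snippet (the variable names are mine) also shows what happens when the key variable is left undefined:

```python
# Variant (a): the key is the string literal "name".
john_a = {"name": "John Smith"}

# Variant (b): the key comes from a variable that holds a string.
key = "name"
john_b = {key: "John Smith"}

# both dictionaries are identical
assert john_a == john_b

# Using an undefined variable as a key raises NameError:
try:
    broken = {undefined_key: "John Smith"}
except NameError as e:
    print("NameError:", e)
```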


#5286331 how to find x,y,z of a point in a sphere if I'm in the center of a sphere?

Posted by haegarr on 11 April 2016 - 11:44 AM

Very true. What I'm not getting is how the angles multiplication does its magic.

Well, there is no angle multiplication. The sine and cosine functions compute the lengths of a unit vector's projections onto the Cartesian axes, where the vector is rotated by the given angle. The multiplication is done because we have 3 dimensions. Again, this is what makes spherical co-ordinates: for a circle we need one angle (and a radius), and for a sphere we need 2 angles (and a radius).
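A small, runnable version of that conversion (the y-up convention and the function name are my choices; conventions vary between textbooks and engines):

```python
import math

def spherical_to_cartesian(azimuth, elevation, radius=1.0):
    """Standard spherical-to-Cartesian conversion, y-up convention:
    elevation rotates out of the x/z plane, azimuth rotates about y."""
    x = radius * math.cos(elevation) * math.sin(azimuth)
    y = radius * math.sin(elevation)
    z = radius * math.cos(elevation) * math.cos(azimuth)
    return (x, y, z)

# azimuth 0, elevation 0 lies on the +z axis of the unit sphere
print(spherical_to_cartesian(0.0, 0.0))
```

Whatever the angles are, the resulting vector always has length `radius`, i.e. it is a point on the sphere's surface.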

#5286315 how to find x,y,z of a point in a sphere if I'm in the center of a sphere?

Posted by haegarr on 11 April 2016 - 10:18 AM

In fact, the calculation of glm::vec3 direction is just an incarnation of the standard conversion from so-called spherical co-ordinates into cartesian co-ordinates. The spherical co-ordinates are given by the 2 angles and the radius being 1, so that you have a point on the surface of the unit sphere. The "direction" is then the vector from the center of the sphere to the point on the surface, but now expressed as an (x,y,z) triple.


The actual magic, so to say, happens in fact when calculating both angles from the mouse position; but that stuff isn't shown in the OP.

#5283606 Location tracking across scenes...

Posted by haegarr on 26 March 2016 - 02:20 PM

I once had that problem, too. I solved it the following way (described here using matrix transformations for column vectors):


An exit is a collision area with its own placement CX. It is linked to an entree which also has its own placement CE. When an actor enters the exit area, the actor's global placement Ag is transformed into the exit's local coordinate system as


   AL := CX^-1 * Ag


This placement, although being computed local to the exit, is directly re-interpreted as placement local to the entree, i.e.


   AL = CE^-1 * Ag'


resulting in the placement behind the entree being


   Ag' := CE * CX^-1 * Ag


Here the placement C of an exit or entree could be modeled using translation, rotation, and scaling as usual


   C := T * R * S


so that different scales, orientations, and positions can be considered when desired.
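A quick numerical sketch of the exit/entree chain (plain Python; the helper names are mine, the placements use translations only for brevity, and the inverse helper assumes no scaling):

```python
def matmul(a, b):
    # 4x4 product of row-major nested lists (column-vector convention)
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(m):
    # inverse of a rotation+translation matrix (no scaling): transpose the
    # 3x3 block, then rotate and negate the translation column
    r = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(r[i][j] * m[j][3] for j in range(3)) for i in range(3)]
    return [[r[0][0], r[0][1], r[0][2], t[0]],
            [r[1][0], r[1][1], r[1][2], t[1]],
            [r[2][0], r[2][1], r[2][2], t[2]],
            [0, 0, 0, 1]]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

CX = translation(10, 0, 0)    # exit placement
CE = translation(-3, 0, 5)    # entree placement

Ag = translation(11, 0, 0)    # actor standing 1 unit past the exit origin

# Ag' := CE * CX^-1 * Ag
Ag_new = matmul(CE, matmul(rigid_inverse(CX), Ag))
# the actor now stands 1 unit past the entree origin, i.e. at (-2, 0, 5)
```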

#5273330 Order of transformation confuse

Posted by haegarr on 30 January 2016 - 03:36 AM

The matrix product is associative. That means you can place pairs of parentheses (sensibly) around individual terms as you like. Now, when you choose parentheses so that you start with the product including the vector, e.g. like so for column vectors (i.e. the vector is on the right and the matrix on the left, typical for OpenGL)

    M1 * M2 * ( M3 * v )

and continue to the outer terms, like so

   = M1 * ( M2 * ( M3 * v ) )

then the parentheses tell you about the locality of the transformation. The wording for this is often "first apply M3, then M2, and then M1", although that wording is imprecise.
Notice that this works for row vectors as well:

   ( ( v' * M3' ) * M2' ) * M1'

So, if you want to apply scaling to the model, rotate the scaled model, and translate the rotated/scaled model,** and you use column vectors, then the formula with parentheses looks like

    T * ( R * ( S * v ) )

which, obviously, corresponds to your 2nd variant of code.

**Notice please that I've chosen a wording here that is not as blurry as "first … then …" ;)

Opengl reads in reverse order ...

As said above, "order" without further context is misleading here. For example, you can calculate

    ( ( M1 * M2 ) * M3 ) * v

as well and get the same result.
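This can also be checked numerically; a plain-Python sketch (the helper names are mine):

```python
import math

def matmul(a, b):
    # 4x4 product of row-major nested lists (column-vector convention)
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def scaling(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

T, R, S = translation(2, 0, 0), rotation_z(math.pi / 2), scaling(3)
v = [1, 0, 0, 1]

inner_first = apply(T, apply(R, apply(S, v)))    # T * (R * (S * v))
outer_first = apply(matmul(matmul(T, R), S), v)  # ((T * R) * S) * v
# both groupings yield the same transformed point
```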

#5268824 Vector4 W Component

Posted by haegarr on 02 January 2016 - 08:02 AM

If the W is 1, the vector is treated as a point when being multiplied by a matrix. That is, it will be translated, rotated and scaled!

I'm used to the terms "position vector" (if w!=0) and "direction vector" or "difference vector" (if w==0).


Notice that w can be anything, and that any vector with w!=0 denotes a position.


I assume not length.... It would be odd if W took part in the Length operation as the vector (2, 2, 2, 0) and the point (2, 2, 2, 1) would have different results.

This is because a position has no length. What you want instead is the distance of the position from a reference point, e.g. (0,0,0,1), hence computing

    | (2,2,2,1) - ( 0,0,0,1 ) | = | (2,2,2,0) |


You should make a distinction between positions and directions based on axioms like these:

  a) position + direction =: position

  b) hence position - position =: direction (or difference)

  c) position + position =: position (unnormalized)

  d) direction + direction = direction
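These axioms can be sketched as a minimal Vec4 class (my own illustration, not a full vector implementation):

```python
class Vec4:
    """Position (w == 1) vs. direction (w == 0) semantics."""
    def __init__(self, x, y, z, w):
        self.x, self.y, self.z, self.w = x, y, z, w

    def __add__(self, o):
        # position + direction -> position; direction + direction -> direction
        return Vec4(self.x + o.x, self.y + o.y, self.z + o.z, self.w + o.w)

    def __sub__(self, o):
        # position - position -> direction (w becomes 0)
        return Vec4(self.x - o.x, self.y - o.y, self.z - o.z, self.w - o.w)

    def length(self):
        assert self.w == 0, "length is meaningful for directions only"
        return (self.x ** 2 + self.y ** 2 + self.z ** 2) ** 0.5

p = Vec4(2, 2, 2, 1)   # position
q = Vec4(0, 0, 0, 1)   # reference position
d = p - q              # direction: | (2,2,2,1) - (0,0,0,1) | = | (2,2,2,0) |
print(d.w, d.length())
```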


So, should i just ignore W for these operations: Addition, Subtraction, Scalar Multiplication, Dot Product, Cross Product, Length and Projection?

Of course not! Certain operations like the cross-product do not make much sense in 4D. The length is something meaningful for direction vectors only (see above).


But then what happens when a Vector with a W of 0 and a Vector with a W of 1 are added? Point + Vector = Point makes sense in my head. But Vector + Point = ? That doesn't really make much sense...

Why should "Point + Vector" be different from "Vector + Point"? Vector addition is commutative.


Does this make sense to anyone else? How do you handle the W component of a Vector4?

A Vector4 has the explicit distinction of position vs. direction semantics. This absolutely makes sense. 

#5268564 Xcode and OpenGL shaders

Posted by haegarr on 31 December 2015 - 02:15 AM

You have to find out how the compiled program accesses the files. Where does it look for them? Are there absolute paths in use, or paths relative to the application, the install directory, the user's home directory, or whatever? Only if that is identified, you can work on a strategy to install the files where they are expected.

#5268442 How would you create a Weapons class?

Posted by haegarr on 30 December 2015 - 04:07 AM

It depends.

^^ This ;) Leaving external forces (deadlines, budget, objectives, engine requirements, …) aside, data-driven composition of game objects is of course the way to go.


I would even say that it isn't necessary to have a class "Weapon". Isn't damage just an effect on a specific stat, induced by using an item in a specific way? Let's say the engine supports a StatActionEffect component class (perhaps inheriting from ActionEffect). The designer adds an instance of StatActionEffect to a game object that represents an item. The instance is parametrized with e.g. intensity=-40, target=collider, stat=health. Let's further say that the player's game object is enabled to use item game objects (here isn't the place to elaborate on this, but you can ask if you want to hear more about it). When the item is successfully used during gameplay, the existence of an ActionEffect component causes its application, and voilà: damage is applied. (Of course, when implementing such a thing, it is not really that easy.) Then anything, even a thrown vase, may be a "weapon".


Notice that, with the same reasoning, a class "Item" is also not strictly necessary. The thing is, one must stop thinking that OOP classes primarily represent physical objects.
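To illustrate the idea (all class, field, and function names here are hypothetical, not from any particular engine):

```python
from dataclasses import dataclass, field

@dataclass
class StatActionEffect:
    """Hypothetical component: applies an intensity to a stat of a target."""
    stat: str
    intensity: int
    target: str = "collider"

@dataclass
class GameObject:
    name: str
    stats: dict = field(default_factory=dict)
    components: list = field(default_factory=list)

def use_item(item, target):
    # when the item is successfully used, every StatActionEffect
    # component attached to it is applied to the target's stats
    for comp in item.components:
        if isinstance(comp, StatActionEffect):
            target.stats[comp.stat] = target.stats.get(comp.stat, 0) + comp.intensity

# even a thrown vase may be a "weapon" - no Weapon class required
vase = GameObject("vase", components=[StatActionEffect(stat="health", intensity=-40)])
orc = GameObject("orc", stats={"health": 100})
use_item(vase, orc)   # orc's health drops from 100 to 60
```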

#5268176 AI for an active time based combat system

Posted by haegarr on 28 December 2015 - 02:49 AM

The various AI techniques have their pros and cons. Your question shows that your current problem is the reasoning, i.e. the selection of the best option under consideration of various factors.


The strength of both FSMs and BTs is IMO not reasoning. FSMs are less prominent today because they become unmaintainable quickly when the number of states and/or transitions grows. However, reasoning in FSMs is implemented in the selection of one of the available transitions. Transitions are labeled with the conditions that need to be met, and the conditions result in a boolean value. Strictly seen, exactly one transition must evaluate to true at a time. That makes the formulation of conditions very hard for you. Selection in a BT has many flavors. None of the standard ones solves your problem. BT selectors are IMO mainly good for controlling behavioral proceedings.


Now, another AI tool is utility. Utility-based AI is good at the kind of reasoning you want, because it

* it is able to consider any amount and kind of factors for each option,

* looks at all options in parallel when deciding which one to select,

* is able to give multiple results in case that an option can e.g. be applied to multiple targets.


My favorite solution is a tree of selectors and actions, where each selector can be a utility-based reasoner, a BT node, a planner, or even an FSM, whatever matches the need at that specific level of AI. Let's say a top-level utility-based reasoner has selected an option due to its best-fitting utility value. The action sequence of the option is then processed. If the behavior of the selected option can be modeled as a sequence then all is fine. If not, then place e.g. a BT selector node (or even a sub-tree) therein. Or, if the actions need to be planned due to runtime circumstances, then place a planner therein.


However, implementing all this in its entirety is a lot of work, perhaps too much for your game. I recommend starting with a tree structure in mind and adding kinds of selectors and actions as needed. A utility-based reasoner as the first kind of selector is obviously fine. Supporters of utility-based AI are around here, probably most notably our Dave himself. So don't hesitate to ask if you have more questions.
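The core of such a utility-based reasoner fits in a few lines. This sketch multiplies normalized consideration scores, which is just one common scheme; all names and the example numbers are mine:

```python
def utility_select(options, considerations):
    """Pick the option with the highest combined utility.
    Each consideration maps an option to a score in [0, 1]; scores are
    multiplied, so any consideration scoring 0 vetoes the option."""
    best, best_score = None, -1.0
    for opt in options:
        score = 1.0
        for consider in considerations:
            score *= consider(opt)
        if score > best_score:
            best, best_score = opt, score
    return best, best_score

options = [
    {"name": "attack", "damage": 0.8, "cost": 0.3},
    {"name": "heal",   "damage": 0.0, "cost": 0.5},
    {"name": "wait",   "damage": 0.1, "cost": 0.0},
]
considerations = [
    lambda o: o["damage"],        # prefer damaging options
    lambda o: 1.0 - o["cost"],    # penalize expensive options
]

choice, score = utility_select(options, considerations)
# "attack" wins: 0.8 * 0.7 beats both alternatives
```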

#5267800 Replacing ECS with component based architecture, or traditional OOP, or somet...

Posted by haegarr on 24 December 2015 - 10:58 AM


There are many other similar examples, making something that jumps, something that flies, something that swims, but then nightmares occur when you need to make JumpingAndFlying, JumpingAndSwimming, FlyingAndSwimming, JumpingAndFlyingAndSimming ... Don't go there.

The rookie might consider using multiple inheritance here. My advice would be to steer clear of that train wreck too :)


Or maybe recognizing that so much inheritance is bad to maintain and then ending up with a god class, which is also the wrong way.

#5267762 Bvh player - blank!

Posted by haegarr on 24 December 2015 - 03:08 AM

Does anyone know what this part does?

That code computes the orientation matrix from a forward direction "dir" and a principal upward direction "up": Given 2 (normalized) vectors "up" and "dir", the vector "side" is computed as the cross-product of both and then normalized. Then "up" is recomputed, again using the cross-product. In the end m[16] is an orientation matrix with its z vector pointing along the direction of "dir", and its y vector pointing roughly in the direction of "up".


side = up x dir

length = length( side )

if length is very short then side = vector( 1, 0, 0 ), length = 1

side = side / length  # normalization

up = dir x side

m = matrix( side, up, dir )


It may be used for the orthonormalization of an orientation matrix, or it may be part of a "look at" function.
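For reference, the pseudocode above can be turned into a runnable sketch (Python; the function name is mine, the columns follow the `matrix(side, up, dir)` layout):

```python
import math

def orientation_from_dir_up(direction, up):
    """Build a 3x3 orientation matrix whose z column is `direction` and
    whose y column points roughly along `up` (column-vector convention)."""
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    side = cross(up, direction)                 # side = up x dir
    length = math.sqrt(sum(c * c for c in side))
    if length < 1e-6:                           # up and dir (nearly) parallel
        side, length = [1.0, 0.0, 0.0], 1.0
    side = [c / length for c in side]           # normalization
    new_up = cross(direction, side)             # up = dir x side
    # columns: side (x), new_up (y), direction (z)
    return [[side[0], new_up[0], direction[0]],
            [side[1], new_up[1], direction[1]],
            [side[2], new_up[2], direction[2]]]
```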