
#5314389 Getting gameObject components to process in systems...

Posted on 09 October 2016 - 03:31 AM

It is okay to pass a pointer to the pool of game objects (here given by the GameObjectManager). At some point there must be an interrelation between systems. And it is also okay to name that class GameObjectManager when it actually manages game objects.



However, I see other problems.


a) When each system fetches components at update time, it has no chance of managing its associated components itself in a way that is meaningful and efficient in the context of the respective sub-system, because the overall architecture prescribes how the components are stored.


b) The fact that a game object has a TransformComponent instance means that it has a spatial placement in the scene. It does not necessarily mean that it is able to move. Using a MovementSystem is one of several possibilities to alter the content of a TransformComponent. So the MovementSystem should not act on a game object just because it has a TransformComponent. (I personally use the class name "PlacementComponent" to make this clear.)


c) The player's game object is something special, of course. However, its movement usually is not, so the MovementSystem should not treat the player's game object differently from an NPC's game object.



To overcome these problems, think e.g. of the following approach:


A) There is a pool of game objects that should be added to the scene at the next opportunity, and another pool of game objects that should be removed from the scene at the next opportunity. This can be understood as a service of the game object management. Potentially any system located anywhere in the game loop can cause game object creation by adding a kind of job description "add game object of class X" to the respective pool (analogously for removal). The said "opportunity to add/remove game objects" is somewhere at the very beginning of the game loop, at least before any scene related sub-system has done its update. It is nothing more than the update() of the game object management service. At this point the service notifies all linked sub-systems, and each of them can investigate the pools and manage its internal operational resources accordingly. So each sub-system can drive its own management structure as needed, e.g. holding pointers to all components of type Y and ignoring any game object that doesn't provide Y.
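To illustrate A), here is a minimal C++ sketch (all names are mine, and the pools are simplified to hold already constructed objects instead of "class X" job descriptions):

#include <algorithm>
#include <memory>
#include <vector>

class GameObject { /* owns its components */ };

// Sub-systems implement this to maintain their own internal structures.
struct GameObjectListener {
    virtual ~GameObjectListener() = default;
    virtual void onAdded(GameObject&) = 0;   // e.g. remember pointers to components of type Y
    virtual void onRemoved(GameObject&) = 0; // e.g. forget those pointers again
};

class GameObjectManagement {
public:
    // Callable from anywhere in the game loop; takes effect at the next update().
    void requestAdd(std::unique_ptr<GameObject> obj) { pendingAdds.push_back(std::move(obj)); }
    void requestRemove(GameObject* obj) { pendingRemovals.push_back(obj); }
    void link(GameObjectListener* listener) { listeners.push_back(listener); }

    // The said "opportunity": called once at the very beginning of the game loop.
    void update() {
        for (auto& obj : pendingAdds) {
            for (auto* l : listeners) l->onAdded(*obj);
            objects.push_back(std::move(obj));
        }
        pendingAdds.clear();
        for (auto* obj : pendingRemovals) {
            for (auto* l : listeners) l->onRemoved(*obj);
            objects.erase(std::remove_if(objects.begin(), objects.end(),
                [obj](const std::unique_ptr<GameObject>& p) { return p.get() == obj; }),
                objects.end());
        }
        pendingRemovals.clear();
    }

private:
    std::vector<std::unique_ptr<GameObject>> objects;
    std::vector<std::unique_ptr<GameObject>> pendingAdds;
    std::vector<GameObject*> pendingRemovals;
    std::vector<GameObjectListener*> listeners;
};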


B) Sub-systems need not all be at the same level. It is more like a layered architecture, where sub-systems on a higher layer may utilize sub-systems on a lower layer. A MovementSystem sits on a rather low layer. For example, a character is mainly controlled by the fact that a PlayerController component or AiAgentComponent is added. The sub-systems running such components may then instruct the MovementSystem when and how to handle the associated TransformComponent data.

#5310696 Is this practical for a resource manager?

Posted by on 14 September 2016 - 01:06 AM

Just for information: the "generally recommended" implementation of resource management, as crystallized from the many discussions here on GDnet, roughly looks as follows. (We speak of assets here, not of operational resources, don't we?)


a) There is a separate manager for each kind of resource.


b) Management is not done monolithically in a single object. Instead, the ResourceManager is a front-end that co-ordinates a couple of collaborators. Have a look at the facade software pattern for an introduction to this scheme.


c) One collaborator behind the facade is responsible for caching. It is e.g. derived from a HashMap, using the resource identifier as key and the resource as value. Because it is a separate object, storing the resources can be done in a resource type specific way, giving the ability for optimizations.


d) Another collaborator is the resource loader. Its job is to read resources from mass storage when being instructed to do so.


e) The manager is *always* the owner of a loaded resource. A client that wants to use a resource *always* requests it from the respective manager, and *never* frees it after getting access.


f) If the manager is requested for a resource, it looks up the resource in the cache and returns it if found; otherwise it calls the loader to load the resource from mass storage, puts the result into the cache and returns it.


g) It may also be favorable to have a resource factory per resource type as a collaborator. If needed by the game, the manager can provide public access to a factory so that clients can generate resources on the fly, e.g. for dynamic content creation. In this case I'd suggest factory functions within the manager's interface.


h) Do *not* use static objects as resource manager. The managers should be accessible from what Sean Middleditch calls "spaces", referred to from game levels. This allows resource groups to be separated and freed in blocks / at once.


i) Resources need not necessarily be derived from a common base class, but I also don't see that as something to be avoided obsessively.
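As a hedged C++ sketch of a) to f) for one kind of resource (all names invented here; the cache is a plain std::unordered_map for brevity):

#include <memory>
#include <string>
#include <unordered_map>

struct Texture { /* pixel data etc. */ };

// Collaborator d): reads a resource from mass storage when instructed.
class TextureLoader {
public:
    std::unique_ptr<Texture> load(const std::string& id) {
        // open the file behind id, decode it, ... (omitted)
        return std::make_unique<Texture>();
    }
};

// The facade b): co-ordinates the cache c) and the loader d),
// and stays the owner of every loaded resource e).
class TextureManager {
public:
    // f): look up the cache; on a miss, load, store, and return.
    Texture* request(const std::string& id) {
        auto it = cache.find(id);
        if (it != cache.end()) return it->second.get();
        auto tex = loader.load(id);
        Texture* raw = tex.get();
        cache.emplace(id, std::move(tex));
        return raw; // the client uses it but never frees it
    }
private:
    std::unordered_map<std::string, std::unique_ptr<Texture>> cache; // c)
    TextureLoader loader;                                            // d)
};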

#5293517 QuadTree: design and implementation question, (n+1)th topic

Posted on 26 May 2016 - 01:12 AM

What about the 1:1 relation of components and quadtrees? I mean to create a separate quad tree for each sortable component? This solution has the highest memory cost but probably the fastest search. But the extra memory consumption is not a big deal IMHO, it's just a couple extra MB at most.


A QuadTree is a storage structure with attention to a partial neighborhood relation. By itself it is totally independent of what kind of objects are stored. Any higher level of abstraction can use (its own instance of) QuadTree to manage its objects. For example, a sub-system that manages the Listener component (meaning a simulated hearing) can check for spatial vicinity by utilizing a QuadTree; the Collision sub-system can manage a set of colliders by utilizing a QuadTree (where a collider itself refers to a Collision component); the lighting sub-system manages lights in a QuadTree; and so on. However, in an implementation where separation of concerns is respected, a QuadTree is just a utility used behind the scenes, and there is an instance of QuadTree for each concern. A sub-system provides services on its interface, and whether it uses a QuadTree or whatsoever internally is irrelevant from a client's point of view. Doing so allows internal implementations to be changed when necessary without impact on the clients.
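A C++ sketch of that separation (illustrative names; the QuadTree internals are stubbed with a linear scan, where a real implementation would partition space into quadrants):

#include <vector>

// Generic spatial utility; knows nothing about the concern it serves.
template <typename T>
class QuadTree {
public:
    void insert(T* obj, float x, float y) { items.push_back({obj, x, y}); }
    std::vector<T*> queryCircle(float cx, float cy, float r) const {
        std::vector<T*> result;
        for (const auto& e : items)
            if ((e.x - cx) * (e.x - cx) + (e.y - cy) * (e.y - cy) <= r * r)
                result.push_back(e.obj);
        return result;
    }
private:
    struct Entry { T* obj; float x, y; };
    std::vector<Entry> items; // stand-in for the actual quadrant nodes
};

struct Light { float x, y; /* color, range, ... */ };

// The lighting sub-system owns its own QuadTree instance as a hidden detail.
class LightingSystem {
public:
    void add(Light* l) { tree.insert(l, l->x, l->y); }
    std::vector<Light*> lightsNear(float x, float y, float r) const {
        return tree.queryCircle(x, y, r); // clients never see the QuadTree itself
    }
private:
    QuadTree<Light> tree; // one instance per concern
};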

#5291599 Determine what side of rectangle is hit by ball

Posted on 14 May 2016 - 01:37 PM


How I wish I could understand it as clearly as you. I'm pretty bad at math, especially reading a formula and then converting it to code. Although I can understand the basics of vectors and a little trig, I still couldn't read the math signs and formulas in your answer. If I can get where exactly the collision takes place and on which side, then this will help me a lot. But sadly, it's just too foreign for me to read this kind of thing. I really don't understand the following. Might help if you can explain this to me. TY


 pb(k) -- I call it Player Ball (Pb) but what is (k) doing there? Is it to multiply or something?

:=  --  is this some condition in math?


[ -w/2  +h/2 ]T    --  what is the T here? Is this the same t, or is it different? And are you denoting it as an exponent, or does it mean something else?


k' --  this ' thing is something I see a lot. What does it mean?



I noticed the radius. I guess this applies to bounding-circle collision? I didn't do anything complicated, I just use two rectangles (AABB).



pb(k) means a vector (the bold letter) p which denotes a position in space, with an identifying index b (set in subscripted style) denoting that the position is that of the ball, and an independent variable k (written in parentheses) denoting that the position is not constant but varies with k. Due to the conditional 0 <= k <= 1 the value of k is restricted to be between 0 and 1, inclusively. In summary: The position of the ball at moment k is … (whatever is written in the following).


pb(t0) is hence to be read as position of the ball at moment t0. Since we defined that t0 denotes the moment in time at the previous update of the game and t1 denotes the moment in time of the current update, the time t ticked from t0 to t1 in the meanwhile. The normalization formula shown in the former post makes it just a bit easier to deal with this, because when setting t = t0 the normalization gives a value of k = 0, and setting t = t1 gives a value of k = 1. So, while time t ticked from t0 to t1, the equivalent k ticked from 0 to 1. Hence: pb(k) with 0 <= k <= 1 denotes all the positions where the ball was in the time that has elapsed between the previous update and the current update. All those positions form a straight line segment. Of course, the ball was not really there because you have computed only pb(t0) and pb(t1), but the collision impact was somewhere in the interval between t0 and t1 and hence we need to look at all the inbetween positions.


pp(k) is to be read identically to pb(k) but with the identifying index p denoting that this is the varying position of the paddle.


The sign := means that we define the term on the side of the : by the formula on the side of the =, i.e. "a := b + c" is read "a is defined as b plus c". For all practical purposes, it can be used like an equation "a = b + c". It just expresses that another term (here "a") is introduced.


The notation [ a b ]T means a 2D vector with elements a and b, written as a row vector (elements side-by-side) and being transposed (denoted by the superscripted T). A transposed row vector results in a column vector, i.e. the elements are written one over the other. Since side-by-side writing is much simpler in plain text, using it together with the transpose operator is just for convenience.
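For reference, the same two notations typeset in LaTeX:

\mathbf{p}_b(k) \quad\text{with}\quad 0 \le k \le 1,
\qquad
\begin{bmatrix} a & b \end{bmatrix}^{\mathsf{T}} = \begin{bmatrix} a \\ b \end{bmatrix}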


The apostrophe is used with many different meanings. In my post it just means "a specific value of k" (here k'), to distinguish it from "a variable value k" (just k).


Using a circle instead of an aligned box is often easier. In the given case, it is definitely easier, and it gives more accurate results at the paddle corners! I suggest you to use it.



Of course, feel free to ask if something isn't clear...

#5291508 Determine what side of rectangle is hit by ball

Posted on 14 May 2016 - 02:58 AM

The underlying problem here is that a single state in time is used to determine which edge of the block was hit first by the ball. But "first" and "second" are ordering criteria in time. The higher the relative speed (i.e. the delta movement distance since the last collision determination), the less reliably can the penetration depth be used.


I don't know whether a trick exists to do the job in a clever way. In general, if you determined that the objects do collide now, you have to roll back time, i.e. shift the objects back to their respective positions where the collision just began, and then check for the edge that was involved (considering the case that actually a corner was hit).


I assume the following situation: One object, the ball, can be described as a moving circle. Its position is given w.r.t. its center, and its radius is known. The other object, the paddle, can be described as a moving rectangular, axis aligned block. Its position is given w.r.t. its center, and its width and height are known. During a single time step, both movements can be assumed to be linear (straight and not accelerated).


Let t1 denote the current time and t0 the time valid at the previous time step. Let pb(t1) and pb(t0) denote the corresponding positions of the ball, and pp(t1) and pp(t0) the corresponding positions of the paddle. By time normalization


   k := ( t - t0 ) / ( t1 - t0 )  so that  0 <= k <= 1 within the last time step,


we can say


   pb(k) := pb(t0) + k * ( pb(t1) - pb(t0) )

   pp(k) := pp(t0) + k * ( pp(t1) - pp(t0) )



The task is now to compute the k where the first collision takes place. A little trick here is to treat the two spatial axes separately. That reduces the problem, because now we have 3 collision tests between a point and a line segment in 1D. To do this we first define the positions of the 4 corners of the block, using its width w and height h, as:


   ctl(k) := pp(k) + [ -w/2  +h/2 ]T

   ctr(k) := pp(k) + [ +w/2  +h/2 ]T

   cbl(k) := pp(k) + [ -w/2  -h/2 ]T

   cbr(k) := pp(k) + [ +w/2  -h/2 ]T


Now, for example let's say that from the above we calculate the left edge of the block as line segment in 2D


   el(k,f) := cbl(k) + f * ( ctl(k) - cbl(k) ) = cbl(k) + f * [ 0 h ]T  with  0 <= f <= 1


so that its x component is just


   xl(k,f) = xbl(k)


For the same k, the point of the ball that can collide first with the left edge is the rightmost point on the circle, so its x component is


   xcr(k) = xb(k) + R


where R denotes the radius. Now compute the k' by solving


   xcr(k') == xl(k',f)


and check for condition


   0 <= k' <= 1


If the condition is violated then the edge is not hit and hence not considered further. Otherwise compute f' by using k' in the y component of the edge equation, i.e. from


   yl(k',f') = ...


and check the condition


   0 <= f' <= 1


If the condition is violated, then the left edge is not the one touched first and hence not considered further. Although … if you want to check for collision with the corners, you should use a less strict condition, i.e. with a small positive tolerance value d, something like


  0 - d <= f' <= 1 + d


Do the above analogously for the right and top edges, too. (I assume that the bottom edge plays no role.) The overall result is then a set of 0, 1, 2, or 3 values of k'. If the set is empty then there is no collision. Otherwise the source of the minimal k' in the set tells you which edge was hit first. If the minimal k' and the next greater k' are very close (compared to a defined limit), then the ball should be considered to hit that corner of the paddle to which the source edges of both k' belong.
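For illustration, here is a C++ sketch of the left-edge test as derived above (written from the formulas just given, analogous for the right and top edges; verify before use):

#include <cmath>
#include <optional>

struct Vec2 { float x, y; };

// Returns k' in [0,1] if the ball's rightmost point meets the left edge
// within the last time step, std::nullopt otherwise.
std::optional<float> hitLeftEdge(Vec2 pb0, Vec2 pb1, // ball center at t0 and t1
                                 Vec2 pp0, Vec2 pp1, // paddle center at t0 and t1
                                 float R, float w, float h,
                                 float d = 0.01f)    // corner tolerance
{
    // x_cr(k) = x_b(k) + R and x_l(k) = x_p(k) - w/2 are both linear in k
    float a0 = pb0.x + R,     a1 = pb1.x + R;
    float b0 = pp0.x - w / 2, b1 = pp1.x - w / 2;

    float denom = (a1 - a0) - (b1 - b0);                // relative speed along x
    if (std::fabs(denom) < 1e-6f) return std::nullopt;  // parallel motion along x
    float k = (b0 - a0) / denom;                        // solves x_cr(k') == x_l(k')
    if (k < 0.0f || k > 1.0f) return std::nullopt;      // not within this time step

    // f' from the y component: the edge spans from y_bl(k') to y_bl(k') + h
    float yBall = pb0.y + k * (pb1.y - pb0.y);
    float yBl   = (pp0.y - h / 2) + k * (pp1.y - pp0.y);
    float f = (yBall - yBl) / h;
    if (f < 0.0f - d || f > 1.0f + d) return std::nullopt; // misses the edge

    return k; // the smallest k' over all tested edges wins
}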



Oh dear! I should stop writing such long posts … well, hopefully it helps to at least understand the problem behind the problem ;) Notice that the above resulted from mental work, so check twice for correctness if you actually want to use it.

#5290609 [UNITY] Enemy Movement

Posted on 08 May 2016 - 03:04 AM

It seems to me that you have a wrong understanding of how updating is intended to work. The MonoBehaviour.Update() method is invoked periodically, i.e. once per cycle of the game loop. For technical reasons, the time from one invocation to the next will never be exactly the same. That is why the Time class, and especially Time.deltaTime, exists. MonoBehaviour.Update() should update the respective object so that its state matches the current moment in time, i.e. a snapshot is to be computed therein. Since all active objects are updated, each update should happen as fast as possible.


That has at least 2 consequences:


(a) The while-loop inside MonoBehaviour.Update() is problematic, as several posters have already mentioned. If your game runs at approximately 60 fps, i.e. there should be 60 updates per second, and your routine delays each of them by 59/60 seconds per run, the game will be anything but responsive.


(b) Since MonoBehaviour.Update() is already invoked periodically, it should not be necessary to run a co-routine in parallel, at least not for the task you're trying to solve.


The intended way of doing an update is more or less like so: When entering the method, determine the behavior that is currently set and hence was active during the last Time.deltaTime seconds. Manipulate the object accordingly in all respects that are not updated automatically. Then determine, based on input and the game situation, which behavior is to be set for the pending time delta, and store that for the next update.
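Sketched in plain C++ for clarity (update(dt) standing in for MonoBehaviour.Update() with Time.deltaTime; all names are illustrative):

enum class Behavior { Idle, Walk, Flee };

class Enemy {
public:
    // Invoked once per cycle of the game loop; returns quickly, never waits.
    void update(float dt) {
        // 1) apply the behavior that was active during the last dt seconds
        switch (current) {
            case Behavior::Walk: x += speed * dt; break; // advance by the elapsed time only
            case Behavior::Flee: x -= speed * dt; break;
            case Behavior::Idle: break;
        }
        // 2) determine the behavior for the pending time delta and store it
        current = decideNextBehavior();
    }
private:
    Behavior decideNextBehavior() { /* input and game situation */ return current; }
    Behavior current = Behavior::Idle;
    float x = 0.0f;
    float speed = 1.5f;
};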

#5290297 How do I design this kind of feature?

Posted on 05 May 2016 - 01:32 PM

Edit: Haegarr, you've over-complicated the system's purpose. Don't try to predict the future or your needs.

Write extensible code and you won't be stuck 10 minutes on questions of "why", "how", "when" again...

+ Managing "world-state" from a bunch of services is not a good idea. Fix me if I got you wrong.

I'm not sure what your criticism is about. Do you mean thinking 10 minutes on a problem is already a waste of time? Really? In my experience, planning what to do is far from being a waste of time, especially when dealing with a complexity like this. The aspects I mentioned do not come from a look into a crystal ball but from explicit needs. Effects do not happen by themselves. They need to be triggered. They need to be selected first (or, to be exact, an action needs to be selected first); "action selection mechanism" is a well known term. They need to be controlled. All these things work together. Looking at a single aspect in isolation from its surroundings will cause problems. Moreover, decoupling systems is a real world requirement, too. If you ignore needs, in the end you will waste more time on refactoring than 10 minutes, for sure.


Then, why exactly is managing world state from a couple of services not a good idea? For example, all component based entity solutions with sub-systems do it. It allows for higher performance due to better cache coherency. It allows for a clear distinction of stages when running the game loop (i.e. what happens when, and in which order). If you are interested in an explanation with greater depth, have a look at Game Engine Architecture by Jason Gregory.


No offense. I already mentioned that many ways can be taken. The way I'm going is not my first attempt; I already learned it the hard way, so to say.

#5290214 How do I design this kind of feature?

Posted on 05 May 2016 - 03:18 AM

First off, there are many possible ways to solve such a problem. Their respective suitability depends on the game mechanics, player control and AI, the need for extensibility, the overall structure of the game engine, and possibly more. In short, working out a solution without knowing all aspects may give a sub-optimal result.


Nevertheless, coming to your question about "services": This is in fact a totally fine approach! You should just think of generalizing some stuff shown in your example above. Think of a service (in my implementation this one is called SpatialServices) that can be queried for the current placement (i.e. position and orientation) of any game object, for a list of game objects that are within a given vicinity range, colliding with a given volume, and so on, optionally with a filter on the game object's type. Such functionality is not special to ItemEffects; it is commonly usable. Notice that this kind of approach is one incarnation of the widely used component based entity architectures.
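An illustrative C++ interface only (the name SpatialServices is mine from above; the signatures are guesses, not a prescription):

#include <vector>

struct Placement { float x, y, orientation; };
class GameObject;

class SpatialServices {
public:
    virtual ~SpatialServices() = default;
    // the current placement (position and orientation) of any game object
    virtual Placement placementOf(const GameObject&) const = 0;
    // all game objects within a given vicinity range, optionally filtered by type
    virtual std::vector<GameObject*> objectsWithin(Placement center, float range) const = 0;
};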


Let's come back to the generalization. A key can in principle be used by the player or by any AI agent. So let's generalize the "player" parameter to an "actor" parameter of the Agent class type. I prefer to use this term here because I think "the agent that performs the action that has this effect is the actor". Next, using a key is not restricted to a door. Let's say that your game objects can implement the Lockable interface, and a key is defined to be a game object that can be used to lock and unlock such a Lockable object; or, in other words, the actions "lock object" and "unlock object" can be applied to a Lockable type of object. Next, passing the Level as argument is fine if it is understood mainly as the current collection of services.


For me, the above formulation leads to the question "why does ItemEffect::use(…) check for applicability?" The anatomy of an action shows IMHO at least the distinct parts pre-condition and outcome. For a more elaborate system, more parts are needed, of course. Let's say we have a class Action with the methods Action::checkApplicability(…) and Action::perform(…). The player controller and AI agent can then invoke checkApplicability to determine whether to take that action into account. E.g. the player controller may adapt its UI to reflect that the action is not available, so that the player simply has no ability to trigger e.g. "unlock object" if s/he has no key or is too far away. Notice please that we have a game mechanics related option here: Do we allow the agent to try an obviously senseless action and then react with "haha, stupid one, you are too far away", or do we prevent this kind of situation? (You remember, I already wrote earlier that such things have an influence.) If checkApplicability allows the action and the action selection mechanism has selected it, then Action::perform(…) is invoked. Nothing hinders you from doing some checks therein, too. This is nonetheless the place where the "try and possibly fail" style of mechanics can be implemented. In general, however, the outcome of the action is coded in that routine.
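A rough C++ sketch of that anatomy (types and checks are placeholders):

class Agent;    // the actor that performs the action
class Lockable; // a game object that can be locked and unlocked

class Action {
public:
    virtual ~Action() = default;
    // the pre-condition part: may the actor take this action right now?
    virtual bool checkApplicability(const Agent& actor) const = 0;
    // the outcome part; "try and possibly fail" mechanics would live here, too
    virtual void perform(Agent& actor) = 0;
};

class UnlockAction : public Action {
public:
    explicit UnlockAction(Lockable& target) : target(target) {}
    bool checkApplicability(const Agent& actor) const override {
        return hasKey(actor) && isNearEnough(actor); // key in inventory, within reach
    }
    void perform(Agent& actor) override { /* switch target to its unlocked state */ }
private:
    bool hasKey(const Agent&) const { return true; }       // stub
    bool isNearEnough(const Agent&) const { return true; } // stub
    Lockable& target;
};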


The above is already some stuff to think about. However, we still have neglected some aspects. For example, an action may have an immediate or a deferred outcome. For example, the action "open object" applied to a chest should change the state from closed to opened (so the outcome is "is opened"), but the transition should be garnished with an animation. So the state changes from (stable) closed to (transient) opening to (again stable) opened. The state is controlled first by the action, then by the animation sub-system. Another aspect is the enhancement: What happens to "open object" if the chest has a lock, and … perhaps a trap?


From all this we can deduce what kind of system I have in mind: a separate world state, a couple of actions as the basis for agents to alter the world state, and the possibility for literally any sub-system and service to work with them. I'm currently on my way to implementing such a system, so you should not be surprised that my explanations go in exactly that direction ;) But, maybe, those thoughts are of interest to you, too.

#5289569 <OpenGL/gl.h> Include Failing

Posted on 01 May 2016 - 09:20 AM

It is not exactly clear to me which build system you are using. Is it Xcode itself? If yes, then adding "/System/Library/Frameworks/" should not be needed.


However, the frameworks therein are runtime components. The headers are development components - they are not bundled with the runtime components. Instead, they are bundled with the SDKs. For example, the SDK for Mac OS X 10.9 brings such headers.




#5289545 OpenGL - Render Plane equation

Posted on 01 May 2016 - 03:08 AM

I don't know how your framework linearizes matrices, so this code snippet
   Matrix4 rot = new Matrix4(...) 

may or may not be correct.


But there is a noticeable difference in transformation computation between your 1st and your 2nd post. In the 1st post you compute

    M_{i+1} := M_i * T * R

with M being the transformation matrix on the stack, T being the translation and R the rotation. The snippet in your 2nd post computes

    M_{i+1} := I * R * T

instead, where I is the identity matrix. Notice that (a) the matrix being formerly on the top of the stack is now ignored, and that (b) the order of translation and rotation is exchanged.


Hence try to change this line

   Matrix4 result = rot * trans;

to

   Matrix4 result = trans * rot;

Further, if the former stack matrix should play a role, start from it instead of the identity matrix, i.e. compute M_{i+1} := M_i * T * R as in your 1st post.

#5289527 OpenGL - Render Plane equation

Posted on 01 May 2016 - 01:18 AM

Your doubts are valid, but the reason is wrong. The vertices are placed in the local x/y plane, hence the normal points along the direction of the local z axis. Putting (0,0,1) into your rotation code and considering that rotation by 360° is identical to rotation by 0°, you correctly see the front of your plane. However, using normals along x (1,0,0) or y (0,1,0) or so with your rotation code still gives you the same vertices, since all the transformations are the same. Changing the order of rotation does not make any difference then.


Actually, a normal and a distance leave several degrees of freedom open. Rendering a plane with vertices needs those degrees of freedom to be fixed. What I mean is that you need a definition of

a) an origin of the plane, so that there is a point with local co-ordinates (0,0,0), 

b) an orientation angle around the normal (think of the rolling angle of a camera; the normal gives you just 2 of the 3 angles).


You solved a) by using the point

   origin := (0,0,0) + distance * normal


You solve b) by just picking an angle, i.e. so that rolling is zero.


So with these fixes you have a position, and an orientation given as a direction plus "no rolling". Let's express "no rolling" as

   up_vector := (0,1,0)

And instead of a pure direction, let's use it as a difference vector from the origin to a tip point

   center_point := origin + normal * 1


Now (with the origin and the center point and the up vector), what we have here are the ingredients for the gluLookAt function! Although being described as a function to compute a view matrix, the function actually just builds a transformation matrix with a translation and rotations so that the z axis points in a specific direction.
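The core of that math, sketched in C++ with a tiny ad-hoc vector type (the resulting columns go into the rotation part of the model matrix, and the translation part is the origin; note that gluLookAt itself builds the inverse of this, since it produces a view matrix):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Builds an orthonormal basis whose local z axis points along the plane
// normal, with "no rolling" w.r.t. the given up vector. Fails (division by
// zero) if normal and up are parallel.
void planeBasis(Vec3 normal, Vec3 up, Vec3 outColumns[3]) {
    Vec3 f = normalize(normal);        // local z axis
    Vec3 s = normalize(cross(up, f));  // local x axis (side)
    Vec3 u = cross(f, s);              // local y axis (the recomputed up)
    outColumns[0] = s; outColumns[1] = u; outColumns[2] = f;
}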


The math behind the gluLookAt function isn't complicated, and leaving OpenGL's matrix stack behind is always a Good Thing™, but I assume you may be happy with the sketched solution. Otherwise feel free to ask :)

#5288428 Engine Subsystems calling each other

Posted on 24 April 2016 - 06:28 AM

The inconvenience of passing a pointer around is not a sufficient reasoning for tolerating the evilness of globals or singletons!


It is common that sub-systems depend on other sub-systems. These dependencies should be directed and acyclic, i.e. a lower level sub-system should never ever call a higher level sub-system. Moreover, a sub-system is not necessarily represented by a single class; from a naming point of view, a "system" is always understood as elements that work together to collectively fulfill a specific task. Graphical rendering, for example, is often implemented as a process passing through several layers and done in several steps: Iterating the scene and collecting renderable entities (perhaps restricted to a specific kind), culling them by visibility, converting them from high level "models" to meshes and textures and shaders, sorting them by some performance criteria, and then sending them to the wrapper behind which the 3rd party graphics API is hidden.


I understood the Map class that is mentioned in several posts of the OP as a game level. As such it is a high level data structure (not a sub-system at all, of course). The wrapper for graphical rendering API is a lowest level thing. Hence giving the wrapper a Map instance is already a design flaw.


In the end, when a well defined structure of sub-systems is instantiated, pointers need to be passed around for sure. After instantiation, the sub-systems know where their serving sub-systems are, and the need for passing pointers exists just for runtime varying things.
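For illustration, constructor injection makes this explicit (names invented here):

class GraphicsAPIWrapper { /* hides the 3rd party API */ };

class Renderer {
public:
    explicit Renderer(GraphicsAPIWrapper& api) : api(api) {}
private:
    GraphicsAPIWrapper& api; // lower level dependency, fixed at instantiation
};

class Scene {
public:
    explicit Scene(Renderer& renderer) : renderer(renderer) {}
private:
    Renderer& renderer; // the higher layer knows the lower one, never vice versa
};

int main() {
    GraphicsAPIWrapper api; // lowest level first
    Renderer renderer(api);
    Scene scene(renderer);  // pointers/references passed once, no globals or singletons
}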


Just my 2 cents, of course.

#5288395 Help with python :(

Posted on 23 April 2016 - 11:49 PM

A key in an associative array needs to be defined. In your case the key is given by a variable which itself is not defined.


a) The key defined by a string literal:

John_Smith = { "name": "John Smith" }

b) The key defined by a string variable: 
name = "name"
John_Smith = { name: "John Smith" }


#5286331 how to find x,y,z of a point in a sphere if I'm in the center of a sphere?

Posted on 11 April 2016 - 11:44 AM

Very true. What I'm not getting is how the angles multiplication does its magic.

Well, there is no angle multiplication. The sine and cosine functions compute the lengths of the projections of a unit vector onto the cartesian axes, where the vector is rotated by the given angle. The multiplications are done because we have 3 dimensions. Again, this is what makes spherical co-ordinates: for a circle we need one angle (and a radius), and for a sphere we need 2 angles (and a radius).

#5286315 how to find x,y,z of a point in a sphere if I'm in the center of a sphere?

Posted on 11 April 2016 - 10:18 AM

In fact, the calculation of glm::vec3 direction is just an incarnation of the standard conversion from so-called spherical co-ordinates into cartesian co-ordinates. The spherical co-ordinates are given by the 2 angles and a radius of 1, so that you get a point on the surface of the unit sphere. The "direction" is then the vector from the center of the sphere to that point on the surface, but now expressed as an (x,y,z) triple.
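In code, one common convention looks like this C++/glm sketch (the angle assignment is an assumption; the code in the OP may distribute yaw and pitch differently):

#include <cmath>
#include <glm/glm.hpp>

glm::vec3 sphericalToCartesian(float yaw, float pitch, float radius = 1.0f) {
    // 2 angles plus radius 1 pick a point on the unit sphere
    return radius * glm::vec3(std::cos(pitch) * std::sin(yaw),  // x
                              std::sin(pitch),                  // y
                              std::cos(pitch) * std::cos(yaw)); // z
}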


The actual magic, so to say, happens when calculating both angles from the mouse position; but that stuff isn't shown in the OP.