
haegarr

Member Since 10 Oct 2005

#5293517 QuadTree: design and implementation question, (n+1)th topic

Posted by haegarr on 26 May 2016 - 01:12 AM

What about a 1:1 relation of components and quadtrees? I mean creating a separate quad tree for each sortable component? This solution has the highest memory cost but probably the fastest search. But the extra memory consumption is not a big deal IMHO, it's just a couple extra MB at most.

 

A QuadTree is a storage structure with attention to a partial neighborhood relation. By itself it is totally independent of what kind of objects are stored. Any higher level of abstraction can use (its own instance of) QuadTree to manage the objects belonging to it. For example, a sub-system that manages the Listener (meaning a simulated hearing) component can check for spatial vicinity by utilizing a QuadTree; the Collision sub-system can manage a set of colliders by utilizing a QuadTree (where a collider itself refers to a Collision component); the lighting sub-system manages lights in a QuadTree; and so on. However, in an implementation where separation of concerns is respected, a QuadTree is just a utility used behind the scenes, and there is an instance of QuadTree for each concern. A sub-system provides services on its interface, and whether it used a QuadTree or whatever else is irrelevant from a client's point of view. Doing this allows internal implementations to be changed when necessary without impact on the clients.
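To illustrate the "one instance per concern" idea, here is a minimal sketch (Python; the QuadTree implementation, the HearingSystem, and all names are illustrative assumptions, not code from any engine). The sub-system owns its own QuadTree instance and exposes only a domain-specific query on its interface:

```python
class QuadTree:
    """Minimal point quadtree, used purely as an internal utility."""

    def __init__(self, x, y, w, h, capacity=4):
        self.bounds = (x, y, w, h)
        self.capacity = capacity
        self.points = []          # list of (x, y, payload)
        self.children = None      # four sub-trees once split

    def _contains(self, px, py):
        x, y, w, h = self.bounds
        return x <= px < x + w and y <= py < y + h

    def insert(self, px, py, payload):
        if not self._contains(px, py):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py, payload))
                return True
            self._split()
        # exactly one child contains the point; any() short-circuits there
        return any(c.insert(px, py, payload) for c in self.children)

    def _split(self):
        x, y, w, h = self.bounds
        hw, hh = w / 2.0, h / 2.0
        self.children = [QuadTree(x, y, hw, hh, self.capacity),
                         QuadTree(x + hw, y, hw, hh, self.capacity),
                         QuadTree(x, y + hh, hw, hh, self.capacity),
                         QuadTree(x + hw, y + hh, hw, hh, self.capacity)]
        for px, py, payload in self.points:   # re-distribute into children
            any(c.insert(px, py, payload) for c in self.children)
        self.points = []

    def query_circle(self, cx, cy, r):
        x, y, w, h = self.bounds
        # prune sub-trees whose box cannot intersect the query circle
        nx = min(max(cx, x), x + w)
        ny = min(max(cy, y), y + h)
        if (nx - cx) ** 2 + (ny - cy) ** 2 > r * r:
            return []
        hits = [payload for px, py, payload in self.points
                if (px - cx) ** 2 + (py - cy) ** 2 <= r * r]
        if self.children is not None:
            for c in self.children:
                hits.extend(c.query_circle(cx, cy, r))
        return hits


class HearingSystem:
    """Owns its own QuadTree; clients only ever see hearable_near()."""

    def __init__(self, world_size):
        self._tree = QuadTree(0.0, 0.0, world_size, world_size)

    def register(self, x, y, entity):
        self._tree.insert(x, y, entity)

    def hearable_near(self, x, y, radius):
        return self._tree.query_circle(x, y, radius)
```

A Collision or lighting sub-system would own another, independent QuadTree instance; swapping the index for a grid later would not change any client code.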




#5291599 Determine what side of rectangle is hit by ball

Posted by haegarr on 14 May 2016 - 01:37 PM

 

How I wish I could understand it as clearly as you do. I'm pretty bad at math, especially at reading a formula and then converting it to code. Although I understand the basics of vectors and a little trig, I still couldn't read the math signs and formulas in your answer. If I can get where exactly the collision takes place and on which side, then this will help me a lot. But sadly, it's just too foreign for me to read this kind of thing. I really don't understand the following. It might help if you can explain this to me. TY

 

 pb(k) -- I call it Player Ball (pb), but what is the (k) doing there? Is it to multiply or something?

:=  --  is this some kind of condition in math?

 

[ -w/2  +h/2 ]T    --  what is the T here? Is this the same t, or is it different? Are you denoting it as an exponent, or does it mean something else?

 

k' --  I see this ' thing a lot. What is meant by it?

 

 

I noticed the mention of the radius. I guess this applies to bounding-circle collision? I didn't do anything complicated, I just use two rectangles (AABB).

 

 

pb(k) means a vector (the bold letter) p which denotes a position in space, with an identifying index b (set in subscripted style) denoting that the position is that of the ball, and an independent variable k (written in parentheses) denoting that the position is not constant but varies with k. Due to the conditional 0 <= k <= 1 the value of k is restricted to be between 0 and 1, inclusively. In summary: The position of the ball at moment k is … (whatever is written in the following).

 

pb(t0) is hence to be read as position of the ball at moment t0. Since we defined that t0 denotes the moment in time at the previous update of the game and t1 denotes the moment in time of the current update, the time t ticked from t0 to t1 in the meanwhile. The normalization formula shown in the former post makes it just a bit easier to deal with this, because when setting t = t0 the normalization gives a value of k = 0, and setting t = t1 gives a value of k = 1. So, while time t ticked from t0 to t1, the equivalent k ticked from 0 to 1. Hence: pb(k) with 0 <= k <= 1 denotes all the positions where the ball was in the time that has elapsed between the previous update and the current update. All those positions form a straight line segment. Of course, the ball was not really there because you have computed only pb(t0) and pb(t1), but the collision impact was somewhere in the interval between t0 and t1 and hence we need to look at all the inbetween positions.

 

pp(k) is to be read identically to pb(k) but with the identifying index p denoting that this is the varying position of the paddle.

 

The sign := means that we define the term on the side of the : by the formula on the side of the =, i.e. "a := b + c" is read "a is defined as b plus c". For all practical purposes, it can be used like an equation "a = b + c". It just expresses that another term (here "a") is introduced.

 

The notation [ a b ]T means a 2D vector with elements a and b, written as a row vector (elements side-by-side) and then transposed (denoted by the superscripted T). A transposed row vector results in a column vector, i.e. the elements are written one over the other. Since side-by-side writing is much simpler in plain typeset text, using a row vector with the transpose operator is just a convenience.

 

The apostrophe is used with many different meanings. In my post it just means "a specific value of k" (here k') to distinguish it from "a variable value k" (just k).

 

Using a circle instead of an aligned box is often easier. In the given case it is definitely easier, and it gives more accurate results at the paddle corners! I suggest you use it.

 

 

Of course, feel free to ask if something isn't clear...




#5291508 Determine what side of rectangle is hit by ball

Posted by haegarr on 14 May 2016 - 02:58 AM

The underlying problem here is that a single state in time is used to determine which edge of the block was hit first by the ball. But "first" and "second" are ordering criteria in time. The higher the relative speed (i.e. the delta movement distance since the last collision determination), the less reliably the penetration depth can be used.

 

I don't know whether a trick exists to do the job in a clever way. In general, if you determined that the objects do collide now, you have to rollback time, so shift the objects back to their respective positions to where collision just began, and then check for the edge that was involved (considering the case that actually the corner was hit).

 

I assume the following situation: one object, the ball, can be described as a moving circle. Its position is given w.r.t. its center, and its radius is known. The other object, the paddle, can be described as a moving rectangular, axis-aligned block. Its position is given w.r.t. its center, and its width and height are known. During a single time step, both movements can be assumed to be linear (straight and not accelerated).

 

Let t1 denote the current time and t0 the time valid at the previous time step. Let pb(t1) and pb(t0) denote the corresponding positions of the ball, and pp(t1) and pp(t0) the corresponding positions of the paddle. By time normalization

 

   k := ( t - t0 ) / ( t1 - t0 )  so that  0 <= k <= 1 within the last time step,

 

we can say

 

   pb(k) := pb(t0) + k * ( pb(t1) - pb(t0) )

   pp(k) := pp(t0) + k * ( pp(t1) - pp(t0) )
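In code, the normalization and the two interpolations above can be sketched like this (Python, purely illustrative names):

```python
def normalize_time(t, t0, t1):
    """Map t in [t0, t1] to k in [0, 1]:  k := (t - t0) / (t1 - t0)."""
    return (t - t0) / (t1 - t0)

def position(p_t0, p_t1, k):
    """Linear interpolation  p(k) := p(t0) + k * (p(t1) - p(t0)),
    applied component-wise to positions given as tuples."""
    return tuple(a + k * (b - a) for a, b in zip(p_t0, p_t1))
```

As stated in the text, normalize_time(t0, t0, t1) gives 0 and normalize_time(t1, t0, t1) gives 1.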

 

 

The task is now to compute that k where the first collision takes place. A little trick here is to separate both spatial axes. That reduces the problem, because now we have 3 collisions between a point and a line segment in 1D. To do this we first define the positions of the 4 corners of the block using its width w and height h as:

 

   ctl(k) := pp(k) + [ -w/2  +h/2 ]T

   ctr(k) := pp(k) + [ +w/2  +h/2 ]T

   cbl(k) := pp(k) + [ -w/2  -h/2 ]T

   cbr(k) := pp(k) + [ +w/2  -h/2 ]T

 

Now, for example let's say that from the above we calculate the left edge of the block as line segment in 2D

 

   el(k,f) := cbl(k) + f * ( ctl(k) - cbl(k) ) = cbl(k) + f * [ 0 h ]T   with   0 <= f <= 1

 

so that its x component is just

 

   xl(k,f) = xbl(k)

 

For the same k, the x component of the point of the ball that collides first is the rightmost point on the circle, i.e.

 

   xcr(k) = xb(k) + R

 

where R denotes the radius. Now compute the k' by solving

 

   xcr(k') == xl(k',f)

 

and check for condition

 

   0 <= k' <= 1

 

If the condition is violated then the edge is not hit and hence not considered further. Otherwise compute f' by using k' from

 

   xl(k',f') = ...

 

and check the condition

 

   0 <= f' <= 1

 

If the condition is violated, then the left edge is not the one touched first and hence not considered further. Although … if you want to check for collision with the corners, you should use a less strict condition, i.e. with a small positive tolerance value d, something like

 

  0 - d <= f' <= 1 + d

 

Do the above analogously for the right and top edges, too. (I assume that the bottom edge plays no role.) Then the overall result is a set of 0, 1, 2, or 3 values of k'. If the set is empty then there is no collision. Otherwise the source of the minimal k' in the set tells you which edge was hit first. If the minimal k' and the next greater k' are very close (compared to a defined limit), then the ball should be considered to hit the corner of the paddle to which the source edges of both k' values belong.
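Since both positions are linear in k, solving xcr(k') == xl(k',f) is a one-line linear equation. Here is a hedged sketch of the left-edge test described above (Python; the function name, the tuple representation, and the default tolerance value are my assumptions, not code from the post):

```python
def left_edge_hit(pb0, pb1, pp0, pp1, R, w, h, d=0.05):
    """Return k' in [0,1] if the ball's rightmost point reaches the
    paddle's left edge within the time step, else None.
    pb0, pb1: ball positions at t0, t1; pp0, pp1: paddle positions;
    R: ball radius; w, h: paddle width/height; d: corner tolerance."""
    db = pb1[0] - pb0[0]            # ball x displacement over the step
    dp = pp1[0] - pp0[0]            # paddle x displacement over the step
    denom = db - dp
    if denom == 0.0:
        return None                 # no relative x motion: edge not reached
    # solve  pb0.x + R + k*db  ==  pp0.x - w/2 + k*dp  for k
    k = (pp0[0] - w / 2.0 - pb0[0] - R) / denom
    if not (0.0 <= k <= 1.0):
        return None                 # contact lies outside this time step
    # y positions at the moment of contact, then the edge parameter f'
    yb = pb0[1] + k * (pb1[1] - pb0[1])
    yp = pp0[1] + k * (pp1[1] - pp0[1])
    f = (yb - (yp - h / 2.0)) / h   # 0 at the bottom-left, 1 at the top-left corner
    if not (-d <= f <= 1.0 + d):    # tolerance d admits near-corner hits
        return None
    return k
```

The right and top edges follow the same pattern with the signs of R, w/2, and h/2 adjusted; the smallest returned k' among the edges identifies the edge hit first.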

 

 

Oh dear! I should stop writing such long posts … well, hopefully it helps to at least understand the problem behind the problem ;) Notice that the above resulted from mental work, so check twice for correctness if you actually want to use it.




#5290609 [UNITY] Enemy Movement

Posted by haegarr on 08 May 2016 - 03:04 AM

It seems to me that you have a wrong understanding of how updating is intended to work. The MonoBehavior.Update() method is invoked periodically, i.e. once per cycle of the game loop. For technical reasons, the time from one invocation to the next will never be exactly the same. That is what the Time class, and especially Time.deltaTime, is for. MonoBehavior.Update() should update the respective object so that its state matches the current moment in time, i.e. a snapshot is to be computed therein. Since all active objects are updated, each update should happen as fast as possible.

 

That has at least 2 consequences:

 

(a) The while-loop inside MonoBehavior.Update() is problematic, as several posters have already mentioned. If your game runs at approximately 60 fps, i.e. there should be 60 updates per second, and your routine blocks for 59/60 of a second per run, the game will be all but unresponsive.

 

(b) Since MonoBehavior.Update() is already invoked periodically, it should not be necessary to run a co-routine in parallel, at least not for the task you're trying to solve.

 

The intended way of doing an update is more or less like so: when entering the method, determine the behavior that is currently set and hence was active during the last Time.deltaTime seconds. Manipulate the object accordingly in all aspects that are not updated automatically. Then determine, based on input and the game situation, which behavior is to be set for the pending time delta, and store that for the next update.
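A language-agnostic sketch of that pattern (written in Python for brevity; the class, the waypoint behavior, and all names are illustrative assumptions, not Unity API). The update advances the object by the elapsed delta time instead of blocking in a loop, and the behavior decided now applies to the next time slice:

```python
class PatrollingEnemy:
    """Moves back and forth along a 1D list of waypoint x positions."""

    def __init__(self, waypoints):
        self.waypoints = list(waypoints)
        self.target = 0                  # index of the waypoint to reach
        self.x = waypoints[0]
        self.speed = 2.0                 # units per second (assumed)
        self.behavior = "walk"           # behavior active for the next slice

    def update(self, delta_time):
        # 1) apply the behavior that was active during the elapsed slice,
        #    producing a snapshot for the current moment in time
        if self.behavior == "walk":
            goal = self.waypoints[self.target]
            step = self.speed * delta_time
            if abs(goal - self.x) <= step:
                self.x = goal            # reached the waypoint this slice
            else:
                self.x += step if goal > self.x else -step
        # 2) decide the behavior/goal for the pending slice and store it
        if self.x == self.waypoints[self.target]:
            self.target = (self.target + 1) % len(self.waypoints)
```

Each call returns immediately; the per-frame work is proportional to delta_time, never a multi-second wait.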




#5290297 How do I design this kind of feature?

Posted by haegarr on 05 May 2016 - 01:32 PM

Edit: Haegarr, you've over-complicated the system's purpose. Don't try to predict the future or your needs.

Write extensible code and you won't be stuck for 10 minutes on questions of "why", "how", and "when" again... 

+ Managing "world state" from a bunch of services is not a good idea. Correct me if I got you wrong. 

I'm not sure what your criticism is about. Do you mean that thinking 10 minutes about a problem is already a waste of time? Really? In my experience, planning what to do is far from being a waste of time, especially when dealing with complexity like this. The aspects I mentioned do not come from a look into a crystal ball but from explicit needs. Effects do not happen by themselves. They need to be triggered. They need to be selected first (or, to be exact, an action needs to be selected first); "action selection mechanism" is a well-known term. They need to be controlled. All these things work together. Looking at a single aspect in isolation from its surroundings will cause problems. Moreover, decoupling systems is a real-world requirement, too. If you ignore needs, in the end you will waste far more than 10 minutes on refactoring, for sure.

 

Then, why exactly is managing world state from a couple of services not a good idea? For example, all component-based entity solutions with sub-systems do it. It allows for higher performance due to better cache coherency. It allows for a clear distinction of stages when running the game loop (i.e. what happens when, and in which order). If you are interested in an explanation in greater depth, have a look at Game Engine Architecture by Jason Gregory.

 

No offense. I already mentioned that many paths can be taken. The way I'm going is not my first attempt; I already learned it the hard way, so to say.




#5290214 How do I design this kind of feature?

Posted by haegarr on 05 May 2016 - 03:18 AM

First off, there are many possible ways to solve such a problem. Their respective suitability depends on the game mechanics, player control and AI, the need for extensibility, the overall structure of the game engine, and possibly more. In short, working out a solution without knowing all aspects may give a sub-optimal result.

 

Nevertheless, coming to your question about "services": this is in fact a totally fine approach! You should just think about generalizing some of the stuff shown in your example above. Think of a service (in my implementation this one is called SpatialServices) that can be queried for the current placement (i.e. position and orientation) of any game object, for a list of game objects within a given vicinity range, for those colliding with a given volume, and so on, optionally filtered by the game object's type. Such functionality is not special to ItemEffects; it is commonly usable. Notice that this kind of approach is one incarnation of the widely used component-based entity architectures.

 

Let's come back to the generalization. A key can in principle be used by the player or any AI agent. So let's generalize the "player" parameter to an "actor" parameter of the Agent class type. I prefer this term here because the agent that performs the action that has this effect is the actor. Next, using a key is not restricted to a door. Let's say that a game object can implement the Lockable interface, and a key is defined to be a game object that can be used to lock and unlock such a Lockable object; in other words, the actions "lock object" and "unlock object" can be applied to a Lockable type of object. Next, passing the Level as an argument is fine if it is understood mainly as the current collection of services.

 

For me, the above formulation leads to the question "why does ItemEffect::use(…) check for applicability?" The anatomy of an action shows IMHO at least the distinct parts pre-condition and outcome. For a more elaborate system, more parts are needed, of course. Let's say we have a class Action with the methods Action::checkApplicability(…) and Action::perform(…). The player controller and an AI agent can then invoke checkApplicability to determine whether to take that action into account. E.g. the player controller may adapt its UI to reflect that the action is not available, so that the player simply has no way to trigger e.g. "unlock object" without a key or while too far away. Notice that we have a game-mechanics-related option here: do we allow the agent to try an obviously senseless action and then react with "haha, stupid one, you are too far away", or do we prevent this kind of situation? (Remember, I wrote earlier that such things have an influence.) If checkApplicability allows the action and the action selection mechanism has selected it, then Action::perform(…) is invoked. Nothing prevents you from doing some checks in there, too; this is the place where the "try and possibly fail" style of mechanics can be implemented. In general, however, the outcome of the action is coded in that routine.
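A minimal sketch of that Action anatomy (Python; only the Action/checkApplicability/perform names come from the post, while the UnlockAction, the reach constant, and the Agent/Door classes are illustrative assumptions):

```python
class Action:
    """Base anatomy: a pre-condition check and an outcome."""
    def check_applicability(self, actor, target):
        raise NotImplementedError
    def perform(self, actor, target):
        raise NotImplementedError


class UnlockAction(Action):
    MAX_REACH = 2.0                      # assumed game-mechanics constant

    def check_applicability(self, actor, target):
        # pre-condition: target is a locked lockable, actor holds a key
        # and is close enough (1D distance for brevity)
        return (getattr(target, "locked", False)
                and "key" in actor.inventory
                and abs(actor.x - target.x) <= self.MAX_REACH)

    def perform(self, actor, target):
        if not self.check_applicability(actor, target):
            return False                 # the "try and possibly fail" variant
        target.locked = False            # the outcome of the action
        return True


class Agent:
    def __init__(self, x, inventory):
        self.x, self.inventory = x, list(inventory)


class Door:                              # a Lockable game object
    def __init__(self, x, locked=True):
        self.x, self.locked = x, locked
```

A player controller would call check_applicability to grey out the "unlock" option in the UI; the action selection mechanism calls perform only for the action it selected.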

 

The above is already some stuff to think about. However, we have still neglected some aspects. For example, an action may have an immediate or a deferred outcome. The action "open object" applied to a chest should change the state from closed to opened (so the outcome is "is opened"), but the transition should be garnished with an animation. So the state changes from (stable) closed to (transient) opening to (again stable) opened. The state is controlled first by the action, then by the animation sub-system. Another aspect is extensibility: what happens to "open object" if the chest has a lock, and … perhaps a trap?

 

From all this you can deduce what kind of system I have in mind: a separate world state, a couple of actions as the basis for agents to alter the world state, and the possibility for literally any sub-system or service to work with them. I'm currently on my way to implementing such a system, so you should not be surprised that my explanations go in exactly that direction ;) But, maybe, those thoughts are of interest to you, too.




#5289569 <OpenGL/gl.h> Include Failing

Posted by haegarr on 01 May 2016 - 09:20 AM

It is not exactly clear to me which build system you are using. Is it Xcode itself? If yes, then adding "/System/Library/Frameworks/" should not be needed.

 

However, the frameworks therein are runtime components. The headers are development components; they are not bundled with the runtime components. Instead, they are bundled with the SDKs. For example, the SDK for Mac OS X 10.9 provides such headers:

 

Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/System/Library/Frameworks/OpenGL.framework/Versions/A/Headers/gl.h

Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/System/Library/Frameworks/OpenGL.framework/Versions/A/Headers/gl3.h




#5289545 OpenGL - Render Plane equation

Posted by haegarr on 01 May 2016 - 03:08 AM

I don't know how your framework linearizes matrices, so this code snippet
   Matrix4 rot = new Matrix4(...) 

may or may not be correct.

 

But there is a noticeable difference in transformation computation between your 1st and your 2nd post. In the 1st post you compute

    M(i+1) := M(i) * T * R

with M being the transformation matrix on the stack, T being the translation and R the rotation. The snippet in your 2nd post computes

    M(i+1) := I * R * T

instead, where I is the identity matrix. Notice that (a) the matrix being formerly on the top of the stack is now ignored, and that (b) the order of translation and rotation is exchanged.

 

Hence try to change this line

   Matrix4 result = rot * trans;

into 

   Matrix4 result = trans * rot;

 
Further, if the former stack matrix should play a role, instead of 
   GL.LoadMatrix(result.OpenGL);
use
   GL.MultMatrix(result.OpenGL);



#5289527 OpenGL - Render Plane equation

Posted by haegarr on 01 May 2016 - 01:18 AM

Your doubts are valid, but the reason is wrong. The vertices are placed in the local x/y plane, hence the normal is along the direction of the local z axis. Putting (0,0,1) into your rotation code, and considering that rotation by 360° is identical to rotation by 0°, you correctly see the front of your plane. However, using normals along x (1,0,0) or y (0,1,0) with your rotation code still gives you the same vertices, since all the transformations are the same. Changing the order of rotation makes no difference then.

 

Actually, using a normal and a distance leaves several degrees of freedom open. Rendering a plane with vertices needs those degrees of freedom to be fixed. What I mean is that you need a definition of

a) an origin of the plane, so that there is a point with local co-ordinates (0,0,0), 

b) an orientation angle around the normal (think of the rolling angle of a camera; the normal gives you just 2 of the 3 angles).

 

You solved a) by using the point

   origin := (0,0,0) + distance * normal

 

You solve b) by just picking an angle, e.g. such that rolling is zero.

 

So with these fixes you have a position, and an orientation given as a direction plus "no rolling". Let's express "no rolling" as

   up_vector := (0,1,0)
And instead of a pure direction, let's use it as a difference vector from the origin to a tip point

   center_point := origin + normal * 1

 

Now (with the origin, the center point, and the up vector) we have the ingredients for the gluLookAt function! Although described as a function to compute a view matrix, it actually just builds a transformation matrix from a translation and rotations so that the z axis points in a specific direction.
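Building such a matrix by hand can be sketched as follows (Python; the function names and the row layout are my assumptions, and the cross-product construction mirrors the usual lookAt-style basis building; note it degenerates if the normal is parallel to the up vector):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plane_model_matrix(normal, distance, up=(0.0, 1.0, 0.0)):
    """4x4 model matrix (list of rows) whose local z axis points along
    `normal` and whose origin is distance * normal (the plane's origin)."""
    z = normalize(normal)
    x = normalize(cross(up, z))      # local x axis, perpendicular to up and z
    y = cross(z, x)                  # local y axis completes the basis
    o = tuple(distance * c for c in z)
    # columns are the local x, y, z axes and the origin (translation)
    return [[x[0], y[0], z[0], o[0]],
            [x[1], y[1], z[1], o[1]],
            [x[2], y[2], z[2], o[2]],
            [0.0,  0.0,  0.0,  1.0]]
```

Transforming the plane's local-space vertices by this matrix places them at the position and orientation described by the normal and distance.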

 

The math behind the gluLookAt function isn't complicated, and leaving OpenGL's matrix stack behind is always a Good Thing™, but I assume you may be happy with the sketched solution. Otherwise feel free to ask :)




#5288428 Engine Subsystems calling each other

Posted by haegarr on 24 April 2016 - 06:28 AM

The inconvenience of passing a pointer around is not sufficient reason for tolerating the evilness of globals or singletons!

 

It is common that sub-systems depend on other sub-systems. These dependencies should be directed and acyclic, i.e. a lower level sub-system should never ever call a higher level sub-system. Moreover, a sub-system is not necessarily represented by a single class; from a naming point of view, a "system" is always understood as elements that work together to collectively fulfill a specific task. Graphical rendering, for example, is often implemented as a process passing through several layers and done in several steps: Iterating the scene and collecting renderable entities (perhaps restricted to a specific kind), culling them by visibility, converting them from high level "models" to meshes and textures and shaders, sorting them by some performance criteria, and then sending them to the wrapper behind which the 3rd party graphics API is hidden.

 

I understood the Map class mentioned in several of the OP's posts as a game level. As such it is a high-level data structure (not a sub-system at all, of course). The wrapper for the graphical rendering API is a lowest-level thing. Hence giving the wrapper a Map instance is already a design flaw.

 

In the end, when a well-defined structure of sub-systems is instantiated, pointers need to be passed around for sure. After instantiation, the sub-systems know where their serving sub-systems are, and the need for passing pointers remains only for things that vary at runtime.

 

Just my 2 cents, of course.




#5288395 Help with python :(

Posted by haegarr on 23 April 2016 - 11:49 PM

A key in an associative array needs to be defined. In your case the key is given by a variable which itself is not defined.

 

a) The key defined by a string literal:

John_Smith = { "name": "John Smith" }
print(John_Smith["name"])

b) The key defined by a string variable:

name = "name"
John_Smith = { name: "John Smith" }
print(John_Smith["name"])
print(John_Smith[name])



#5286331 how to find x,y,z of a point in a sphere if I'm in the center of a sphere?

Posted by haegarr on 11 April 2016 - 11:44 AM


Very true. What I'm not getting is how the angles multiplication does its magic.

Well, there is no angle multiplication. The sine and cosine functions compute the lengths of a unit vector's projections onto the cartesian axes, where the vector is rotated by the given angle. The multiplication is done because we have 3 dimensions. Again, this is what makes spherical co-ordinates: for a circle we need one angle (and a radius), and for a sphere we need 2 angles (and a radius).




#5286315 how to find x,y,z of a point in a sphere if I'm in the center of a sphere?

Posted by haegarr on 11 April 2016 - 10:18 AM

In fact, the calculation of glm::vec3 direction is just an incarnation of the standard conversion from so-called spherical co-ordinates into cartesian co-ordinates. The spherical co-ordinates are given by the 2 angles and a radius of 1, so that you have a point on the surface of the unit sphere. The "direction" is then the vector from the center of the sphere to that point on the surface, but now expressed as an (x,y,z) triple.
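The conversion can be sketched as follows (Python; the axis convention with y up is an assumption on my side — the glm code in the OP may order the axes differently):

```python
import math

def spherical_to_cartesian(azimuth, elevation, radius=1.0):
    """Standard spherical-to-cartesian conversion (angles in radians).
    With radius 1 the result is a unit direction vector from the
    sphere's center to a point on its surface."""
    x = radius * math.cos(elevation) * math.sin(azimuth)
    y = radius * math.sin(elevation)
    z = radius * math.cos(elevation) * math.cos(azimuth)
    return (x, y, z)
```

Whatever the angles, the resulting vector has length `radius`, i.e. it always lies on the sphere.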

 

The actual magic, so to say, happens when calculating both angles from the mouse position; but that stuff isn't shown in the OP.




#5283606 Location tracking across scenes...

Posted by haegarr on 26 March 2016 - 02:20 PM

I once had that problem, too. I solved it in the following way (described here using matrix transformations for column vectors):

 

An exit is a collision area with its own placement C_X. It is linked to an entree which also has its own placement C_E. When an actor enters the exit area, the actor's global placement A_g is transformed into the exit's local coordinate system as

 

   A_L := C_X^-1 * A_g

 

This placement, although computed local to the exit, is directly re-interpreted as a placement local to the entree, i.e.

 

   A_L = C_E^-1 * A_g'

 

resulting in the placement behind the entree being

 

   A_g' = C_E * C_X^-1 * A_g

 

Here the placement C of an exit or entree could be modeled using translation, rotation, and scaling as usual

 

   C := T * R * S

 

so that different scales, orientations, and positions can be considered when desired.
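A numeric sketch of the whole chain (Python with NumPy; 2D homogeneous placement matrices, scaling omitted for brevity, all names and values illustrative):

```python
import numpy as np

def placement(tx, ty, angle):
    """2D homogeneous placement C = T * R (scaling omitted here)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def teleport(A_g, C_X, C_E):
    """Re-interpret the actor's exit-local placement as entree-local:
    A_g' = C_E * C_X^-1 * A_g."""
    return C_E @ np.linalg.inv(C_X) @ A_g
```

For example, an actor standing 1 unit right of and 2 units above an exit placed at (10, 0) reappears 1 unit right of and 2 units above an entree placed at (0, 50), i.e. at (1, 52).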




#5273330 Order of transformation confuse

Posted by haegarr on 30 January 2016 - 03:36 AM

The matrix product is associative. That means you can place pairs of parentheses around terms as you like, as long as the order of the factors is preserved. Now, when you choose the parentheses so that you start with the product that includes the vector, e.g. like so for column vectors (i.e. the vector is on the right and the matrix on the left, typical for OpenGL)

    M1 * M2 * ( M3 * v )

and continue to the outer terms, like so

   = M1 * ( M2 * ( M3 * v ) )

then the parentheses tell you about the locality of the transformation. The wording for this is often "first apply M3, then M2, and then M1", although that wording is imprecise.
 
Notice that this works for row vectors as well (here the apostrophe denotes the transpose):
   ( ( v' * M3' ) * M2' ) * M1'
 
So, if you want to apply scaling to the model, rotate the scaled model, and translate the rotated/scaled model,** and you use column vectors, then the formula with parentheses looks like
    T * ( R * ( S * v ) )
which, obviously, corresponds to your 2nd variant of code.
 
**Notice please that I've chosen a wording here that is not as blurry as "first … then …" ;)
 

Opengl reads in reverse order ...

As said above, "order" without further context is misleading here. For example, you can calculate

    ( ( M1 * M2 ) * M3 ) * v

as well and get the same result.
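This can be verified numerically (Python with NumPy; random matrices, purely illustrative):

```python
import numpy as np

# three arbitrary 4x4 transformation matrices and a vector
rng = np.random.default_rng(0)
M1, M2, M3 = (rng.standard_normal((4, 4)) for _ in range(3))
v = rng.standard_normal(4)

inner_first = M1 @ (M2 @ (M3 @ v))    # "apply M3, then M2, then M1"
outer_first = ((M1 @ M2) @ M3) @ v    # same product, different grouping

assert np.allclose(inner_first, outer_first)
```

Only the grouping differs; the left-to-right order of the factors, and hence the result, is identical.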




