
# haegarr

Member Since 10 Oct 2005

### #5121355 Linear collision question

Posted by on 05 January 2014 - 05:22 AM

Let's say that a line consisting of these two points A (108, 157) and B (144, 132) exists and that we are trying to find if another point collides with said line. If I'm understanding this correctly, I would want to create a perpendicular vector from this initial line to the point of interest and calculate the magnitude of this vector: if positive it's on side A, and if negative it's on side B - resulting in a collision. Is this correct? If this is not correct (albeit - perhaps even far off) would / could you provide me with a trivial example to demonstrate how the perpDot would be used correctly in this situation?

You got it.

The edges are given in order from left to right, and their respective endpoints are ordered the same way. This allows you to determine the line segments and write each one down as a limited ray:

sn( t ) := pn + t * dn ,  0 <= t <= 1

dn := pn+1 - pn

where pn denotes your points[n] array element at index n. Notice that using t==0 gives the start point, t==1 gives the end point, and 0<t<1 gives any point in-between.
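As a small sketch of that parametric form (in Python for brevity; the point values are the ones from the question above):

```python
# Parametric form of the segment from A to B: s(t) = p + t*d, 0 <= t <= 1.
A = (108.0, 157.0)
B = (144.0, 132.0)

d = (B[0] - A[0], B[1] - A[1])          # difference vector d = B - A

def s(t):
    """Point on the segment for 0 <= t <= 1."""
    return (A[0] + t * d[0], A[1] + t * d[1])

print(s(0.0))   # start point A
print(s(1.0))   # end point B
print(s(0.5))   # a point in-between (the midpoint)
```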

Now let us use the perpDot to compute a perpendicular vector to d. There are 2 possibilities, and it makes a big difference which one we use. Take the ceiling of our room and define that the "positive" side should be below the ceiling. Hence we want the perpendicular vector to point more-or-less downwards. So we pick

vn := [ dny, -dnx ]

as the perpendicular vector.

With this we could formulate an equation with which we want to reach the point of interest c (i.e. the one to check for collision):

pn + t * dn + u * vn = c

Now the trick is this: if u is greater than 0 then c is on the "positive" side and does not collide, but if u is 0 or less it is on the "negative" side and does collide. Computing u means mapping the difference vector from the start of the edge to the point of interest onto the perpendicular vector. This is done using the dot product (strictly speaking this yields u scaled by the squared length of vn, but a positive scale factor does not change the sign, and only the sign matters here):

u = ( c - pn ) . vn

So the steps to determine on which side the point of interest is w.r.t. an edge are:

1.) get the 2 points from the array

2.) compute their difference vector d (take care of order)

3.) compute the perpendicular vector v

4.) compute the difference from the edge's start to the point of interest

5.) compute the signed distance u

6.) assess the signed distance with 0 as the border
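The steps above can be sketched like this (a Python illustration, not taken from the thread; which sign means "collision" depends on which perpendicular you picked, as explained for the ceiling):

```python
def side_of_edge(p0, p1, c):
    """Return u, whose sign tells on which side of the edge p0->p1
    the point c lies (steps 1-6 from the text, using v = [dy, -dx])."""
    # 2.) difference vector d = p1 - p0 (order matters)
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    # 3.) perpendicular vector v = [dy, -dx]
    vx, vy = dy, -dx
    # 4.) difference from the edge's start to the point of interest
    ex, ey = c[0] - p0[0], c[1] - p0[1]
    # 5.) signed "distance" u via the dot product (only the sign matters)
    return ex * vx + ey * vy

A, B = (108.0, 157.0), (144.0, 132.0)
print(side_of_edge(A, B, (126.0, 160.0)))   # one side of the edge
print(side_of_edge(A, B, (126.0, 120.0)))   # opposite sign: the other side
```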

Notice that the floor is different, because you want to assess the point of interest to be above it (for the ceiling it was below). You can do so either by using the other perpDot result, or else by saying that a u below 0 means no collision and a u greater than or equal to zero means collision.

(BTW: I've written this down from the top of my head, so please check it thoroughly.)

I also just realized that the link containing the photo image of this scenario isn't able to be used in this thread for some reason. Is it easy enough to understand the action I'm looking for without the visual?

I personally had not imagined your problem correctly before I had a look at the picture. I investigated the HTML source and found the link by searching for the word "apparently". For sure that is not convenient ;)

If you want to put a link in a post then write down the link text, mark it, press the chain symbol in the editor's toolbar, and paste the URL into the opening requester. That should do the trick.

### #5121106 Linear collision question

Posted by on 04 January 2014 - 05:29 AM

The immediate problem is in line 114: The probability that two independently computed float variables are exactly the same is low. You must not check whether the point of interest is on the edge, but instead check whether the point of interest is on this side (okay) or on the other side (collision) of the edge.

One approach to the said check is to determine the candidate line segment (perhaps as you did; I haven't investigated that in detail), to compute its normal (keyword: perpDot), and then to compute the distance of the point of interest from the line segment along the normal. Whether the point is on this or the other side is defined by the sign of the distance.
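To illustrate why exact equality fails (a generic Python demonstration; the `collides` helper and its sign convention are made up for the example):

```python
# Two mathematically equal values computed along different routes
# rarely compare exactly equal in floating point:
a = 0.1 + 0.2
b = 0.3
print(a == b)        # False on IEEE-754 doubles
print(abs(a - b))    # tiny, but not zero

# Therefore test the *sign* of the signed distance instead of equality.
def collides(u):
    """Hypothetical check: u <= 0 means 'on the colliding side'."""
    return u <= 0.0

print(collides(-0.001), collides(0.001))
```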

### #5121100 Animation data: disk versus memory

Posted by on 04 January 2014 - 05:02 AM

What is the use case the OP has in mind? I assume a runtime of a game (i.e. not a tool, test, scripting, or similar; otherwise give more information).

1.) Using an ASCII file is not the wisest choice. Game data should be stored in binary files.

2.) 20,000 frames at 60 frames per second means an animation duration of 5.5 minutes. This is surely also enough time to let some disk I/O operations happen without destroying the overall performance. Of course, reading the data in chunks is still meaningful. I don't assume the necessity for random access, of course.

3.) On the other hand: 5.5 minutes … what should that be, a game or a movie? Is that really the expected number?

4.) When baking the binary data, regression can be used to find smooth sections, so that key framing and interpolation are possible to reduce the amount of data.

5.) 7 times 50,000 floats, each float 4 bytes long, cover 1.4 million bytes. This is hardly a problem "with today's common hardware" (assuming that we're speaking of gamer desktop systems; if you mean something else you have to specify) by itself. Of course, you told us nothing about any other memory burden, so again you may give more information.
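The arithmetic behind points 2.) and 5.) can be checked quickly (the frame and channel counts are the ones the OP stated):

```python
frames = 20_000                 # key frames mentioned by the OP
fps = 60
duration_min = frames / fps / 60
print(duration_min)             # roughly 5.6 minutes

floats = 7 * 50_000             # 7 channels of 50,000 floats each
bytes_total = floats * 4        # 4 bytes per float
print(bytes_total)              # 1,400,000 bytes, i.e. ~1.4 MB
```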

### #5121093 Programming a walk cycle

Posted by on 04 January 2014 - 03:53 AM

Sliding can be seen where the feet are in contact with the ground.

Reason A: The animation is made so that the distance between feet is not constant although both are in contact with the ground.

Reason B: The speed of the center of mass does not correspond to the animation and its playback speed.

Now the question is how the animation system works. Is WALK_SPEED a constant or a player controlled variable? Is animation blending in use? Does the movement depend in any way on the ground and its slope? Is some locomotion system in use?

Okay, probably no locomotion system and also no animation blending. Under such conditions sliding due to reason A can of course not be eliminated but only minimized, unless you are willing to rework the animation poses themselves.

The suggestions made by ankhd and KulSeran can be used for both reasons A and B. Both suggestions do not really prevent sliding but reduce it to an amount where it is hopefully no longer visible (in the case of reason B). I.e. they don't guarantee that the contact point of a foot on the ground is preserved. The suggestion of ankhd has the lowest impact on code, while KulSeran's suggestion allows for finer control (i.e. positioning in dependence on the animation's phase).

A full solution would be to use a locomotion system where the contact points are under control and the body's position is computed backwards from a contact point, eventually utilizing inverse kinematics when dealing with several contact points. This however would require far-reaching changes in both the animation data and the animation system, and hence seems not to be an option because it would be too costly.

### #5120661 300 000 fps

Posted by on 02 January 2014 - 04:51 AM

and should measure it in a different way?

This in the first place! FPS is a reciprocal measure and as such not useful once the numbers go beyond some tens of FPS, perhaps up to e.g. 100 FPS or so. The 900 is already a less meaningful number. It's better to use a linear measure: compute the mean time per frame over a couple of frames as an absolute measure, and perhaps a percentage between such values for a comparative one.
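A quick Python illustration of why the reciprocal measure misleads (the sample frame rates are made up):

```python
def ms_per_frame(fps):
    """Convert a frame rate into the linear measure: milliseconds per frame."""
    return 1000.0 / fps

# The same nominal "gain" in FPS means very different real time savings:
print(ms_per_frame(30) - ms_per_frame(60))        # ~16.7 ms saved
print(ms_per_frame(300) - ms_per_frame(330))      # ~0.3 ms saved
print(ms_per_frame(300_000))                      # ~0.0033 ms per frame
```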

Also: a cube is most probably not a meaningful test at all. In a real world scene you may encounter several limits: DMA transfer, texture sampling, pixel fill rate, shader complexity, …; none of them is stressed by a cube example (assuming you don't mean a Koch cube ;)).

Regarding the question of the performance boost itself: OpenGL 1 is very, VERY old. None of the (more or less) modern techniques were supported. If you use a modern OpenGL it is much better adapted to existing graphics cards. So yes, it is principally possible, of course.

### #5120649 Open GL Questions?

Posted by on 02 January 2014 - 03:19 AM

I have started to gain a strong foundation in OpenGL and would like to become more advanced with OpenGL. But I am still not at a level where I can begin 3D game development.

Just to remember: OpenGL is a low-level graphics API, implemented by your graphics card driver. A game consists of much more than graphics. So, mastering graphics helps with the "eye-catcher part" but doesn't automatically mean that you can write a game.

5). If I include input and I want OpenGL to move the model, would the algorithm go something like this:

A game loop is built differently from an application event loop. While the latter is constructed with the intention to let multiple processes and applications run at a time, a game is intended to take over the machine's resources for best performance. This means especially that it usually does not wait for events (i.e. there is no idle state) but simulates the world all the time. Search the forum and the internet for the keyword "game loop" to find more details.

5) You don't tell OpenGL to move things. Generally, you would update your model matrix with the new position for your model, then that new position would get reflected on the screen when you multiply model * view * projection.

This, except that the order should be "projection * view * model" in OpenGL.
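A minimal sketch of that order in the column-vector convention (plain Python lists instead of a real math library; the translation values are made up, and the projection is left as identity for brevity):

```python
def matmul(A, B):
    """Matrix product for row-major nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Column-vector convention: the matrix nearest the vector acts first.
model = translation(5, 0, 0)        # place the model in the world
view = translation(0, 0, -10)       # hypothetical view matrix
projection = [[float(i == j) for j in range(4)] for i in range(4)]  # identity

mvp = matmul(projection, matmul(view, model))    # projection * view * model
p_local = [[0.0], [0.0], [0.0], [1.0]]           # a point in model space
p_clip = matmul(mvp, p_local)
print(p_clip)    # x was moved by the model matrix, z by the view matrix
```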

### #5120076 Is OBB always the minimum bounding box?

Posted by on 30 December 2013 - 12:12 PM

AFAIK:

An OBB is a cuboid that may be axis aligned but need not be (to distinguish it from the special case AABB). As such any cuboid used as a BB is an OBB. Assuming that the MBB is also a cuboid, the MBB is an OBB but an OBB is not necessarily also the MBB.

Looking at the methods to compute an OBB: e.g. the one by Eberly tries to iteratively approximate the MBB. Other methods, like simple principal component analysis, may definitely produce sub-optimal results.

### #5120014 View matrix - explanation of its elements

Posted by on 30 December 2013 - 05:21 AM

First off: I agree that both views are valid (although IMHO my arguments show that "it's a camera in the world" is even more suitable). But I disagree with your argumentation for declaring the one view a misconception.

The problem with placing the camera in the world is that you don't use the same transformation matrix as when you place any other type of object in the world. So on that regard, it is not just like any kind of object.

When one places a camera in the world, one uses translations and rotations to describe where the camera is positioned and how it is oriented w.r.t. the global co-ordinate system. This is already implied by the phrase "placing a camera in the world". How you build those transforms is a matter of convention: whether your camera controller pitches the camera down or up when moving the mouse forward is a convention. Whether you compose the view matrix directly instead of composing the camera matrix and inverting it afterwards is an implementation detail.

An example: I create a path onto which a Frenet frame is animated. I may attach an arrow to the frame to let an RPG character shoot with bow and arrow. I may alternatively attach the camera to it because I want to render a cut scene. Now, is the camera distinct from the arrow from the path's point of view? No! The animation system will write exactly the same transform to the target. I would not come to the conclusion to let the animation system make a distinction between a camera and something else. Not only is there no necessity for that, it would even be more complex. Later on, the rendering system, as the one which knows what "camera" means, uses one of them, computes the view matrix, and does its thing.

Another example is a CES (component-entity system). A camera can be implemented as a component which brings the field of view and the far and near clipping planes into an ordinary entity. The entity has, like every entity that is placed in the world, a Placement component as well.

Another example is a third person camera. It is attached in a forward kinematic way to the PC (a.k.a. "parenting").

All these examples work fine with "camera in the world", and IMHO they unify the view of things without trouble.

I agree that you can use any reference system (view space, world space, screen space, etc). But you can't use more than one camera. That is, if you have more than one camera, you render them one at a time. E.g. creating stereoscopic views, shadow maps or cube maps.

Yes, of course. But it means that you create a situation where one camera is stationary and all the others are not, and declare that as special, although it is no more special than any of the situations using another camera.

### #5120001 View matrix - explanation of its elements

Posted by on 30 December 2013 - 03:40 AM

A camera can be seen like any other object in the world, and as such it can be placed in the world.

That is a common misconception, which makes it hard to understand. Actually, the camera is stationary! You move the whole world (using the view matrix), and you turn the whole world, to get what you want in front of the camera. Looking at it that way, the transforms are obvious.

Well, no, it isn't a "misconception". It is a legitimate way of looking at things. It may seem that view space is something special, but it isn't.

* Can a camera be placed in the world like any other object? Yes, it can.

* To what is a camera stationary? It is stationary to ... itself!! But that is true for any object in the scene. If you choose the reference system in which you sit, then it becomes stationary for you.

* What if you have 2 cameras in the world: how could saying that both are stationary be more intuitive?

* Is a light also special / stationary because one transforms into its local space when computing a shadow volume?

* Is an ellipsoidal shape special because, when raytracing it, one transforms into its local space where it becomes a sphere?

* Is the world special because one does collision detection within?

Mathematically there is nothing like a natural break point in the chain of transformations from any local space into the screen space. One chooses the space where a given task is best to be done. For sure, the best space for screen rendering is not the world ;)

### #5119792 View matrix - explanation of its elements

Posted by on 29 December 2013 - 05:36 AM

Problem is that you need to understand half a dozen concepts to have all of your questions answered. That is too much to be explained in detail here. I'll try to give the basics, so you have some hints to search the forum and the internet in general.

1.) A camera can be seen like any other object in the world, and as such it can be placed in the world. Placement is done by using a transformation that explains how to convert a point in the camera's local space into the global space. Notice that a point in local space and its converted pendant are actually the same point; the difference is the reference system in which the co-ordinates of the point are given. Notice further that the transformation converts from local to global space. Clearly, there should also be the other way, namely converting from global space into local space. Those two transformations are the mathematical inverses of each other, because going from the local space into the global space and back into the same local space should yield the original point co-ordinates.

With the above in mind, the camera matrix is normally the transformation from camera local space into global space, the same as a model matrix is for a model. But the view matrix normally names its inverse, i.e. the matrix to transform from global space into camera local (a.k.a. view) space. So you must not confuse the camera and view matrices (as you did in the OP).
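The round trip described in 1.) can be sketched with a pure translation, where the inverse is trivial (a Python illustration; the camera position is made up):

```python
def apply(m, v):
    """Apply a 4x4 row-major matrix to a column vector (x, y, z, w)."""
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(4))

# Hypothetical camera matrix: the camera sits at (3, 2, 8) in the world.
camera = [[1, 0, 0, 3],
          [0, 1, 0, 2],
          [0, 0, 1, 8],
          [0, 0, 0, 1]]

# For a pure translation the inverse is simply the negated translation.
view = [[1, 0, 0, -3],
        [0, 1, 0, -2],
        [0, 0, 1, -8],
        [0, 0, 0, 1]]

origin_local = (0, 0, 0, 1)                 # camera position in its own space
in_world = apply(camera, origin_local)      # camera matrix: local -> global
print(in_world)                             # the camera's world position
print(apply(view, in_world))                # view matrix: back to the origin
```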

2.) A vector may denote a position, a difference (between 2 positions), a direction, but not a distance. A distance is a scalar value. A distance can be computed as the length of a difference vector.

3.) You must be aware that vectors can be written in a row or in a column. This makes a difference if you think of the matrix product. E.g. if you have a matrix and a vector, you are able to compute

matrix * column_vector or row_vector * matrix

but you are not able to compute

column_vector * matrix or matrix * row_vector

The operator to convert between column and row vectors is named "transpose".

What you wrote down as "view matrix" looks like a row-vector matrix. But the product you've done later has the structure suitable for column vectors. Using them together is wrong!
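A small Python sketch of point 3.) (the rotation matrix is an arbitrary example): the same result needs either matrix * column_vector, or row_vector * transposed matrix.

```python
def matvec(m, v):
    """matrix * column_vector"""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def vecmat(v, m):
    """row_vector * matrix"""
    return [sum(v[i] * m[i][j] for i in range(len(v))) for j in range(len(m[0]))]

M = [[0, -1],
     [1,  0]]          # 2D rotation by 90 degrees, column-vector convention
v = [1, 0]

print(matvec(M, v))    # rotate the column vector

# The same result with a row vector requires the transposed matrix:
Mt = [[M[j][i] for j in range(2)] for i in range(2)]
print(vecmat(v, Mt))
```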

4.) The w component, better named the homogeneous component, is a trick (well, not really, but it seems to be). Notice that both scaling and rotation are multiplicative operations that can be applied to positions as well as direction vectors, but translation is an additive operation that can be applied to position vectors only (yes: a direction vector has no concept of position!). Without such a trick, concatenating several operations into one matrix is not possible without already considering which kind of vector will be transformed.

Now, using a homogeneous co-ordinate makes it possible to unify translation and rotation/scaling, because the homogeneous co-ordinate makes an explicit distinction between positions and directions.
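The distinction can be demonstrated in a few lines (a Python sketch with made-up values): a translation matrix moves a position (w == 1) but leaves a direction (w == 0) untouched.

```python
def apply(m, v):
    """Apply a 4x4 row-major matrix to a column vector (x, y, z, w)."""
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(4))

T = [[1, 0, 0, 10],    # translation by (10, 0, 0)
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

position = (2, 3, 4, 1)     # w == 1: a position, the translation applies
direction = (2, 3, 4, 0)    # w == 0: a direction, the translation is ignored

print(apply(T, position))
print(apply(T, direction))
```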

### #5119637 Design for a Dialogue System in a RPG

Posted by on 28 December 2013 - 04:32 AM

(1) There is a class Soldier. It should not be necessary to sub-class Soldier, Mage, Thief, Peasant, ChickenOnStreet, ChickenInStable, ChickenInSoap, and so on. From the chosen class names, being intentionally provoking when coming to chickens ;), you can see that it would cause an explosion of classes. What happens if a class Dog becomes necessary due to story development later? Will its integration break the game because, of the many places where the class' type plays a role, one was overlooked when adapting the code?

Notice that trying to generalize classes (I make a reference to the mentioned class NPC) will not rescue you! That has been proven by many trials in the past. The typical effect would be shifting functionality into the base classes although it is necessary only in a sub-set of the derived classes.

(2) The class Soldier/NPC already inherits 3 classes/interfaces: Sprite, Collidable, Talkable; maybe there is already a bunch hidden in Sprite, like e.g. Moveable and Drawable. It is better to think of Soldier/NPC not as a concrete "is-a" but as a more-or-less abstract collection of "has": it has a Sprite as visual representation, it has a Placement, it has a CollisionVolume, it can be talked to.

(3) The obvious "misconception" in the sense of (2) is the fact that Soldier/NPC contains some variables dealing with whether it runs a quest. Maybe it originates in Talkable, although that would not be 100% okay either. However, Talkable is an interface and hence does not have member variables. So you have to implement those variables (and the functionality at all) for each and every class that inherits Talkable. What a mess in an RPG where optimally each and every character, perhaps also environmental life (hey, we are in a game here ;)), should be Talkable. The better solution would be a class Quest and a member variable in Soldier/NPC (or whatever) that points to a concrete instance of Quest if the Soldier/NPC instance actually has a quest. So the Soldier/NPC would be Talkable anyway, may or may not have a Quest, and the Quest (if any) may or may not be solved (which is then stored in a member of Quest, not in a member of Soldier/NPC).

So, with regard to your 1st question: as already drafted, the common solution to the above problems is to use composition over inheritance. The second level is to further follow the data-driven approach. This will become more obvious when dealing with different quests and dialogues: you don't want to sub-class Dialogue for each different talk you may have.
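The composition idea from (3) can be sketched in a few lines (Python; all class and member names are illustrative, not taken from the OP's code):

```python
# Composition instead of inheritance: an NPC *has* a quest (or not),
# and the quest tracks its own state.
class Quest:
    def __init__(self, description):
        self.description = description
        self.solved = False

class NPC:
    def __init__(self, name, quest=None):
        self.name = name
        self.quest = quest          # None if this NPC offers no quest

soldier = NPC("soldier", Quest("fetch the lost sword"))
chicken = NPC("chicken")            # talkable, but without a quest

print(soldier.quest is not None)    # the soldier has a quest
print(chicken.quest is None)        # the chicken has none; no sub-class needed
soldier.quest.solved = True         # the state lives in Quest, not in NPC
```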

With regard to your 2nd question: I stumbled over the name "charactersToAdd" in this context. Where are the characters to be added? They are already all in the scene, aren't they? They are not all to be added to the talk, are they? So, what does "to add" mean?

Regardless of this little confusion, I don't see an overkill. What may happen is that the collision check becomes slow if really many, many characters populate the world. Then it may become meaningful to prefix a broad-phase collision detection. It may also be meaningful to separate collision detection from dialogue mode activation, namely if the same collision can be used for other reactions, too. In such a case, first collecting all detected collisions and then deciding what to do with them is probably more efficient.

Another thing is that you have to control the situation where another character bumps into your avatar while a talk is already in progress. Either you suppress the collision detection used for talk initialization, or you may use the mechanism described in the following paragraph to expand the round of participating talkers.

I'm not sure whether character.setTalk(…) is the way to go. I think it would be better to have an external instance of Talk to which all participating characters are bound. They may refer back to the Talk instance as long as they participate, so they are tagged as talking and the other participants can be found easily. The instance of Talk would further be a suitable place to track how and how far the talk has progressed (in conjunction with a Dialogue instance).

With regard to your 3rd question: what exactly do you mean? IMHO you cannot plan the story in all details before starting game programming. You should have planned completely what features should be available (like: how quests are structured; that dialogues can take place; that sword fighting can happen; that the day-night cycle is considered; …). To become somewhat independent of the story details, using mechanisms like composition and data-driven design is suitable.

Posted by on 22 December 2013 - 04:45 AM

What about the cache performance of more complex game systems? For example: an object with a Health component is hit by one with a Damage component. If the components are updated by type, then when we check collisions for all the Health components, we also have to check how much damage to apply (from the Damage component attached to the game object that hit the Health component's object). Is this still a predictable enough access pattern for the CPU cache? Or would jumping back and forth between accessing the Health and the Damage components cause lots of cache misses?

IMHO this cannot be answered clearly because "it depends"! How much does it cost to bring data in a well defined order opposed to what performance can be gained?

Let's think of an FPS. There are up to 10 characters, and damage is applied suddenly by shooting. Your game loop runs 60 times per second. Even if each character shot 2 times per second you would get an average of 1 shot per 3 loop iterations. That is hardly something worth any effort at optimization. That doesn't mean that e.g. Health isn't stored well organized in a table inside a Soundness sub-system.

Now let's think of an RPG with 80+ NPCs, and poisoning, illness, bleeding, curses, and whatever else can be applied by both the players and the NPCs. Further there are buffs and de-buffs and such. In other words, damage may occur not (only) suddenly but lastingly (or does it even need to be "lasting" only?). We can think of tables for health values as well as for the different sorts of lasting damage, hosted in a damage sub-system, ordered the same way, and processed on an update() invocation basis. This still will probably not make much of a difference in performance, but it is relatively easy to implement that way.

Notice that the detection of sudden damage should not belong to Damage and Health directly but to the more low-level CollisionVolume stuff. Damage is then an effect caused as a collision response, so to say. If you need to decouple collision detection from damage application, you can again use queues that are written by the collision detection and read by the aforementioned update() method.

My 2 €-Cents.

Posted by on 21 December 2013 - 05:36 AM

a.) Don't use DOD just because it is "nice". If it doesn't fit, use something else. Make the decision on a per-problem basis.

b.) Ideally you apply the same execution (read once from memory and then often from the instruction cache) onto sequentially organized data (some reads from memory and a great percentage of data cache hits). When you have inhomogeneous data, e.g. the command stream for the low-level rendering system, i-cache hits may be less frequent, but storing the commands as blobs utilizing a stack/linear/sequential allocator helps greatly with data cache hits.

c.) An event oriented design is data-driven but not data-oriented, so to say. You cannot handle events by sorting them by type, because events usually need to / should be processed in timestamped order. However, treating them as blobs and using a ring buffer allocator as storage still allows you to use an approach very close to the one introduced in b.)
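A simplified sketch of c.) (Python, with fixed-size slots instead of variable-size blobs; the event tuples and capacity are made up): events of mixed type are kept in arrival order in a ring buffer and processed oldest first, never sorted by type.

```python
class EventRing:
    """Fixed-capacity ring buffer that preserves timestamped order."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0          # next slot to read
        self.count = 0

    def push(self, event):
        if self.count == len(self.slots):
            raise OverflowError("ring buffer full")
        self.slots[(self.head + self.count) % len(self.slots)] = event
        self.count += 1

    def pop(self):
        event = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return event

ring = EventRing(4)
ring.push((0.0, "key_down", "W"))          # (timestamp, type, payload)
ring.push((0.1, "mouse_move", (5, -2)))    # a different event type, same buffer

first = ring.pop()     # the older event is processed first
second = ring.pop()
print(first, second)
```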

d.) Consider dropping the use of an event system if possible. An event system is a tool to deal with asynchronous, err, events. A game is an application that can be well structured in an overall synchronous way (see e.g. the excerpt from Jason Gregory's book Game Engine Architecture about the game loop).

e.) The above has another effect, too: You can rely on certain sub-systems to have completed when calling other sub-systems to begin, weakening / removing some need for high-level synchronization.

f.) Parallelism can be done using vector processing (SSE, GPU) on the one hand or multi-threading on the other hand. Vector processing does not need more synchronization than scalar processing, but depends heavily on data structuring.

g.) Multi-threading, on the other hand, is a totally different beast. I second the opinion that multi-threading is complicated in general. However, multi-threading can be better controlled if done, for example, like so: the main thread does nothing but collect input events from the OS / hardware and push them into a synchronized pipe (this is mainly done to guarantee that no input gets lost, and some OSs require their event loop to be processed on the main thread). Another thread runs the game loop (including reading input from said input pipe) and, as a result, fills a render command queue. Another thread reads the render command queue and feeds the graphics API. To be efficient, the render queue is double buffered, of course.

### #5117776 Why are vertices represented as 4f, but normals and light positions as 3f?

Posted by on 18 December 2013 - 03:04 AM

To develop the above posts a bit, vectors can be used for positions, differences between positions, directions, and normals/tangents/bi-normals/bi-tangents.

* A position vector denotes, well, a point in space.

* A difference vector has no beginning and no end in the sense of positions; it has just a direction and a length. This may be confusing, but look at it this way: it is easy to find two different pairs of positions whose difference vectors are identical; you cannot tell which one resulted from which pair of positions by looking at the vector's components.

* A direction vector is a vector with its length set to 1 (unit length), so that the vector still has a direction but no distinguishable length. A difference vector can be made to a direction vector by "normalization".

* A normal/... vector is a direction vector with the constraint to have a specific angle to a line, surface, and/or other vectors.

In a homogeneous co-ordinate system a position vector has the homogeneous co-ordinate, say w, set to a value unequal to zero, where w==1 denotes the normalized case (all cases can simply be converted to the normalized case by dividing by w). All other vector kinds have w==0. In an affine co-ordinate system the w is implicit (you as the programmer have to remember and think of which kind of vector you're dealing with). In a homogeneous co-ordinate system you have w as a helper, but you still need to remember and think of the special constraints on normals/tangents/... as mentioned by Álvaro.
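The "difference vector vs. direction vector" relation can be shown in a couple of lines (a Python sketch; the point values are arbitrary):

```python
from math import sqrt

p1 = (1.0, 2.0, 2.0)
p2 = (4.0, 6.0, 14.0)

# A difference vector: it has a direction and a length, but no position.
diff = tuple(b - a for a, b in zip(p1, p2))
length = sqrt(sum(c * c for c in diff))

# Normalization turns the difference into a direction vector (unit length).
direction = tuple(c / length for c in diff)
print(length)
print(direction)
```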

I'm sure (not to say I'm hoping) that the examples you found on the internet consider this in one way or another.

### #5117674 2D vs 3D Camera transform matrix

Posted by on 17 December 2013 - 03:18 PM

There is no dependency on dimensionality with regard to the order of transformations.

In 3D space, to obtain the camera view transform matrix, I believe it is scale * rotation * translation matrix...

Such statements are meaningless unless

(1) one tells whether column vectors (like e.g. typically in OpenGL) or else row vectors (like e.g. typically in D3D) is used, and

(2) one tells what exactly the "camera view transform" means, and

(3) one tells what effect should be yielded in.

Usually, the camera transform is the name for what places the camera object into the world, i.e. the transformation from the camera local space into the global space. Also usually, the view transform is the inverse of the camera transform, i.e. the transformation from the global space into the camera local (a.k.a. view) space.

With the above definitions and the usage of column vectors, a camera transform is often built up as

C := T * R * S

what you will call

translation mul rotation mul scaling

but actually means

scaling on the mesh, rotating the scaled mesh, translating the rotated scaled mesh

The corresponding view transform is then

V := C^-1 = S^-1 * R^-1 * T^-1

where, if you store the matrices as view matrices, you say

scaling mul rotation mul translation

Now, the same game with row vectors gives you

C := S * R * T

so that the order is reversed. This is true for all derived matrices, but the meaning is left as is! E.g. the above C actually still means

scaling on the mesh, rotating the scaled mesh, translating the rotated scaled mesh

This is the difference between column and row vector math.

In the end you see that both orders are valid in both systems, depending on what you speak of. Furthermore, you can think of applications that are not as easy as the composite of 3 matrices above, giving perhaps other orders in both systems. Whenever you read about transformation matrices you must be aware of the convention used in the text, or else you cannot interpret exactly what you read.
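The C and V compositions above can be checked numerically: C * V must come out as the identity. A sketch in 2D homogeneous matrices, column-vector convention (the concrete scale, rotation, and translation values are made up):

```python
def matmul(A, B):
    """Matrix product for row-major nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

S    = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]        # scale by 2
Sinv = [[0.5, 0, 0], [0, 0.5, 0], [0, 0, 1]]
R    = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]       # rotate by 90 degrees
Rinv = [[0, 1, 0], [-1, 0, 0], [0, 0, 1]]
T    = [[1, 0, 2], [0, 1, 3], [0, 0, 1]]        # translate by (2, 3)
Tinv = [[1, 0, -2], [0, 1, -3], [0, 0, 1]]

C = matmul(T, matmul(R, S))            # camera transform: C = T * R * S
V = matmul(Sinv, matmul(Rinv, Tinv))   # view transform:   V = S^-1 * R^-1 * T^-1

identity = matmul(C, V)                # C * V should be the identity
print(identity)
```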

Hope that helps. Matrix math is a bit confusing if the caveats are not known.
