# haegarr

Member Since 10 Oct 2005

### #5140056 Getting a direction constant from normalized vector

Posted on 18 March 2014 - 10:48 AM

I'm very interested in the OP's meaning of simplicity … hopefully s/he will give feedback in this thread. In the meantime, I want to point out quirks in the answers above, because some of them are misleading. Please don't feel offended; I simply think the OP should get reasoned answers.

@Waterlimon: Comparing the effort of a single dot-product with a trigonometric function is unfair, because an entire max search over N directions using the dot-product in 2D requires N*2 MULs, N ADDs, and N CMPs. For N = 8 that means 16 MULs and 8 ADDs to be compared with 1 trig function (when using the max search in both cases). Your wording suggests that 2 MULs and 1 ADD are to be compared with N trig functions. Nonetheless, your method may still be faster for the given use case though ...

@papalazaru: Is it really simple if one needs to stare at 5 lines of code for 2 minutes before understanding them? Is it really simple if you use the word "quadrant" although you compute 6 and 2 sections, respectively? Is it really simple if you need 12 elements in the mapping although 8 are expected? And why do you use 30° and 60° as limits, which obviously does not yield an equal quantization w.r.t. the angle, without any textual hint about that? And what is the check (x >= 0.0f) good for?

@aggieblue92: Please tell me compared to what other method using the dot-product has the advantage of being scalable. Each additional resolution step doubles the number of MULs and ADDs, while using the atan2 function always costs the same (because exactly 1 call is needed). Moreover, your code snippet runs a loop, and inside the loop you compute the sine and the cosine in dependence on the loop argument (okay, you actually do not, but that is an error in your code; intentionally you use a changing angle argument). This is hardly an optimization, is it?
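
For illustration, here is a minimal sketch of the atan2-based quantization discussed above. The function name and the sector convention (0 = east, counting counter-clockwise) are my own choices, not anything from the thread:

```cpp
#include <cassert>
#include <cmath>

// Quantize a normalized 2D direction into one of 'sectors' compass sectors
// (0 = east, counting counter-clockwise) with a single atan2 call.
// The cost is constant regardless of the number of sectors.
int directionFromVector(float x, float y, int sectors = 8) {
    const float twoPi = 6.28318530718f;
    float angle = std::atan2(y, x);       // range (-pi, pi]
    if (angle < 0.0f) angle += twoPi;     // shift to [0, 2*pi)
    // add half a sector so each constant sits in the middle of its range
    int s = static_cast<int>((angle + twoPi / (2 * sectors)) / (twoPi / sectors));
    return s % sectors;
}
```

Note that the quantization is equal w.r.t. the angle, which was exactly the point of contention above.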

### #5139472 mat3x3 array issue

Posted on 16 March 2014 - 10:55 AM

I have no experience with OpenGL 4+, so maybe my knowledge is obsolete. The following is what I think:

1.) As long as the row_major layout isn't used (as in the OP), the standard layout for a 3x3 OpenGL matrix in memory is

[ mc1r1 mc1r2 mc1r3 0 mc2r1 mc2r2 mc2r3 0 mc3r1 mc3r2 mc3r3 0 ]

But a vector of glm::mat3, as used in this line

`std::vector<glm::mat3> matrix(8192, glm::mat3(1.0f));`

probably has the zeros left out. I suspect that the qualifier "packed" does not help here. You should debug and inspect matrix.data() for its content.

2.) If you use a 2D position and a 3x3 transform matrix, then I would assume that the 3rd co-ordinate is the homogeneous one. If you want to use it that way, then this line

`gl_Position = vec4(vec3(pos, 0.0f) * matrix[gl_InstanceID], 1.0);`

needs to be changed into this line

`gl_Position = vec4(vec3(pos, 1.0f) * matrix[gl_InstanceID], 0.0).xywz;`

or else translation will not work (please double check). The line computes a 3x1 by 3x3 transform, and reorders the result into a 4x1 homogeneous vector.
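
If the zeros really are missing, one fix is to pad the data on the CPU side before uploading. A minimal sketch of that idea, assuming the padded column layout shown in 1.) (the function name and types are illustrative, not from the OP's code):

```cpp
#include <array>
#include <cassert>
#include <vector>

// Expand tightly packed column-major 3x3 matrices (9 floats each) into the
// padded layout sketched above: each column occupies 4 floats, with the
// 4th float unused, as a std140-like buffer layout expects.
std::vector<float> padMat3Array(const std::vector<std::array<float, 9>>& mats) {
    std::vector<float> out;
    out.reserve(mats.size() * 12);
    for (const auto& m : mats) {
        for (int col = 0; col < 3; ++col) {
            out.push_back(m[col * 3 + 0]);
            out.push_back(m[col * 3 + 1]);
            out.push_back(m[col * 3 + 2]);
            out.push_back(0.0f); // padding, never read by the shader
        }
    }
    return out;
}
```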

### #5139192 Point-and-Click Adventure - Need help / opinions

Posted on 15 March 2014 - 04:17 AM

There is nothing mysterious about it, and at that level there are no differences between text adventures, point-&-click adventures, and 3D RPGs. You are in fact right with your assumption about the versatility of the chosen approach. However, there are some details of interest.

The world state is the overall value of all variables that define each and every dynamic aspect of the world; e.g. whether a door is locked, whether a gate is open, whether an NPC is dead, whether a quest is solved, but also non-boolean aspects like how many gold coins are at a given place, the room number where the PC currently is, and so on. The world state can exist in distributed form, but saving and restoring a game is much easier if it is concentrated instead. If the gameplay is hardcoded, then the world state may be an object with a variable for each aspect. If it is data driven, then it may be a map (i.e. string key to value object).

The values of the world state have a type like boolean or integer and a name (either the hardcoded variable name or the string key), but they have no explicit semantic. Instead, the semantic is given by the use. If the action "open the door" has a condition that checks a boolean state variable and denies opening the door if the variable state is false, then the variable has the meaning of "the door being locked/unlocked". If the guarding condition of the action resolves to true, then the action can take place. In this case, the action alters the state of a boolean variable with the meaning of "the door is open". Notice that actions have a pre-condition (of arbitrary complexity) and an outcome. The pre-condition checks a part of the world state by comparison with a needed state, and the outcome alters a part of the world state.
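
The data-driven variant described above might be sketched like this (the type names and the string keys are illustrative only):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// A data-driven world state: string keys mapped to integer values
// (booleans stored as 0/1). The keys carry no explicit semantic;
// the meaning is given by how actions use them.
using WorldState = std::map<std::string, int>;

// An action couples a pre-condition (checks part of the state) with
// an outcome (alters part of the state).
struct Action {
    std::function<bool(const WorldState&)> precondition;
    std::function<void(WorldState&)> outcome;

    bool tryPerform(WorldState& state) const {
        if (!precondition(state)) return false;
        outcome(state);
        return true;
    }
};
```

A hypothetical "open the door" action would then check `door_locked` in its pre-condition and set `door_open` in its outcome.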

You may use a kind of planning. For example, the action "go through door" has a pre-condition that requires the door to be open. Let's say the door is closed at the moment. The game may reject the action with the comment "the door is required to be open". So the player issues an "open the door" action. Well, that action is guarded by a pre-condition that requires the door to be unlocked. Let's say it is locked. So the game rejects the action with the comment "the door needs to be unlocked first". This can be continued with the need of having a key in the hand and the need of the key matching the keyhole. However, the game may also have some automatisms that spare the player such micro-management. This is where planning comes into play.

Let's say that there is a pool of pre-defined automatic actions. The player issues the aforementioned "go through door" action. The game determines that it is not possible. So it first looks into its pool of automatic actions for an action with an outcome that would alter the game state so that the issued action becomes possible. It finds the automatic "open door" action that would do. However, the pre-condition of that automatic action requires an unlocking, so the game again searches the automatic actions, this time for one with an outcome that would generate the "door is unlocked" world state. Only if the game cannot find a sequence of actions that in its entirety would do does it reject the originally issued action and give a comment (usually based on how far the search for automatic actions was successful).
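
The backward search over the automatic-action pool can be sketched as a small depth-bounded recursion. This is only one possible shape of such a planner; the struct, the depth limit, and the state keys in the usage below are my own assumptions:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// An automatic action from the pre-defined pool: a pre-condition over the
// world state and an outcome that alters it.
struct AutoAction {
    std::string name;
    std::function<bool(const std::map<std::string,int>&)> pre;
    std::function<void(std::map<std::string,int>&)> out;
};

// Try to perform 'goal'. If its pre-condition fails, search the pool for an
// automatic action whose outcome would enable it, recursing up to 'depth'.
// Candidates are tried on a copy of the state so failed branches leave the
// real state untouched.
bool plan(const AutoAction& goal, std::vector<AutoAction>& pool,
          std::map<std::string,int>& state, int depth = 4) {
    if (goal.pre(state)) { goal.out(state); return true; }
    if (depth == 0) return false;
    for (auto& a : pool) {
        auto trial = state;
        if (!plan(a, pool, trial, depth - 1)) continue;
        if (goal.pre(trial)) { goal.out(trial); state = trial; return true; }
    }
    return false;
}
```

With an "unlock door" and an "open door" action in the pool, issuing "go through door" against a locked door then succeeds via the chained automatic actions.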

Now something else: for a one-man team, a one-game goal, and the demand of getting the game done, it is mainly okay to hard-code things like the above. A more flexible way would be to implement a data-driven approach. This has the following advantages: it separates programming and designing the game a bit, so it is easier for some people to work on the engine and others on the game content. It further allows reusing the game engine (with some modifications) for the next game of the same kind. But it is definitely a harder job for the programmer to get things right, because it is another level of abstraction. I wanted to mention it here anyway.

### #5138898 TBS and FPS

Posted on 14 March 2014 - 04:09 AM

You should consider that a game engine is a compound of sub-systems: graphics, sound, input, network, storage, and perhaps other foundation systems; simulation support like animation and physics; scene management like a scene graph or a component based entity system (CES); AI, if NPCs or environmental life play a role; perhaps a quest system or some story-driving backend; and … I may have forgotten something. Even if you don't find a game engine that fits your needs in all these topics, you can find engines (or "libraries") that implement one or another sub-system. So you will not need to create things from the ground up, but to implement game logic and glue between existing sub-systems.

Posted on 13 March 2014 - 12:13 PM

The most official documentation, so to say, is the specification. It is maintained by the Khronos Group, especially in their registry for OpenGL, where you can find the PDF about GLSL 4.40 (besides all other documents, also former versions). Jump down to page 171 in that PDF.

### #5138637 Calculate relative rotation matrix

Posted on 13 March 2014 - 03:06 AM

The above answers are correct in principle, but there is a caveat: they assume that column vectors are used. If the OP uses row vectors instead, then the order of the two matrices needs to be reversed compared to what is shown above. Neglecting the normally needed transposes, things look as follows:

Column vectors:

MB * vB = MA * vA  <=>  MA^-1 * MB * vB = MA^-1 * MA * vA  <=>  MA^-1 * MB * vB = vA    <-- the solution shown above

Row vectors:

vB * MB = vA * MA  <=>  vB * MB * MA^-1 = vA * MA * MA^-1  <=>  vB * MB * MA^-1 = vA    <-- notice the reverse order of matrices compared to the other solution
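
To make the column-vector case concrete, here is a small numeric check with 2D rotation matrices (all names are my own; for a pure rotation the inverse is the transpose). Computing MA^-1 * MB with MA = rot(a) and MB = rot(b) must yield the relative rotation rot(b - a):

```cpp
#include <cassert>
#include <cmath>

// 2x2 matrix, column-major: m[0], m[1] form the first column.
struct Mat2 { float m[4]; };

// Rotation by angle a (column vectors, counter-clockwise).
Mat2 rot(float a) {
    return { { std::cos(a), std::sin(a), -std::sin(a), std::cos(a) } };
}

// Matrix product A * B (column vectors multiply on the right).
Mat2 mul(const Mat2& A, const Mat2& B) {
    return { { A.m[0]*B.m[0] + A.m[2]*B.m[1],
               A.m[1]*B.m[0] + A.m[3]*B.m[1],
               A.m[0]*B.m[2] + A.m[2]*B.m[3],
               A.m[1]*B.m[2] + A.m[3]*B.m[3] } };
}

// Inverse of a rotation = its transpose.
Mat2 inv(const Mat2& A) { return { { A.m[0], A.m[2], A.m[1], A.m[3] } }; }
```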

### #5138132 Skeletal Animation System

Posted on 11 March 2014 - 10:26 AM

You have an architectural problem: it is wrong in principle if a drawing routine has to update the model. When it's time to render, all animation, physics, and collision resolution work has been done and a stable situation has been reached. At best the skinning may be done then, although I would do so only if the skinning happens on-the-fly on the GPU.

So, the solution to animating a skeleton (or any other animatable variable) is to have an animation sub-system. The sub-system manages all of the currently active animation tracks. A track is bound to (e.g.) the orientation part of a bone, another track is bound to the position part of the bone (or, if you use combined tracks, simply to the placement of a bone). When the runloop invokes the update() method of the animation sub-system, the active tracks are iterated, the respective key values are interpolated, and the result is written to the animated target.

In practice this is a bit more complex, because of the need for animation blending and perhaps animation layering. (We have a thread here that discusses 2 possible technical solutions and some background.) In the end you need to compute a weighted average of position and orientation for each bone, where choosing the weights decides on blending or layering.

After all animation tracks are processed, also each bone has its current local transform set. The next step is then probably to compute their global transforms w.r.t. the model's space.

You still may want to implement an explicit Skeleton::update(…). Notice that this means having a location where the animation tracks for a particular skeleton are concentrated, but in the end it is important to obey the overall sequence of updates when processing the runloop. You may read this book excerpt over at Gamasutra about the need for a defined sequence of updates. Furthermore, with an animation sub-system that deals with animations in general, you no longer have the special case of skeletons but use the system for animating other variables, too.

### #5138073 Spritesheet Algorithms

Posted on 11 March 2014 - 06:06 AM

As said earlier, you should try to move the responsibility for such a thing into the toolchain. So a sprite sheet (generally a texture atlas with a single LOD) consists of both the texture and a list of texture clips. Each clip is named (for retrieval) and provides the sequence of texture co-ordinates.

When the sprites are delivered e.g. in a PSD file with several layers, then the toolchain can use e.g. a shrinking frame based on alpha (or color key) to determine the smallest axis aligned bounding-box (or an octagon box) on each layer, and pack them into an atlas using one of the bin packing algorithms. The name of the layer can be used as name for the clip.

When the sprites are delivered e.g. in a single layer of a PSD file, and the sprites can be separated by axis aligned non-overlapping boxes, then PSD's slice mechanism can be used. Each slice is translated into a clip. Here the artist is responsible for the correct packing.

If all of the above doesn't work for you: when the sprites are delivered in a layer and no slicing is available (or cannot be done because the slices would not be rectangular), then the toolchain can generate a background layer, flood-fill all pixels in the background layer that are not covered by the transparency alpha / the color key in the foreground layer, apply a thinning algorithm (a kind of morphological operation), and vectorize the pixel traces using a neighborhood search. Depending on how you do the vectorization, you may need to flatten the sequence of line segments by averaging or a more sophisticated algorithm, just to reduce the number of segments.

### #5137517 Branching paths.

Posted on 09 March 2014 - 06:12 AM

> Is there a better way to handle different paths ... other than tons and tons of if statements?

The number of condition checks is given by the story and the number of branches. In the end you need to check as many conditions as are needed to drive the story. However, the question is how often particular conditions are checked and whether or not they are checked needlessly.

If you implement the conditionals in a fat hard-coded program structure, you are on the wrong track for sure. If instead you have a graph, where each story fragment is represented by its own node and each node manages the possible transitions to other fragment nodes, then you can identify the current fragment (and hence the current story branch and the currently possible transitions) by investigating the node object referred to by the current value of the "story advance" pointer.

If you further make the story fragments and transitions abstract enough that they need not be coded individually, you get to the point where the story is data driven. The bunch of if statements, if you wish to call it so, is then hidden in interpreting the condition-describing data of the current story fragment.
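
A minimal sketch of such a story graph (node names, the state map, and the condition shape are all illustrative assumptions):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

struct StoryNode;

// A transition out of a fragment: a condition over the world state
// and the target fragment node.
struct Transition {
    std::function<bool(const std::map<std::string,int>&)> condition;
    StoryNode* next;
};

// A story fragment node; only its own transitions are ever checked,
// instead of one fat if-cascade over the whole story.
struct StoryNode {
    std::string name;
    std::vector<Transition> transitions;
};

// Advance the "story advance" pointer along the first transition whose
// condition holds; return the (possibly unchanged) current node.
StoryNode* advance(StoryNode* current, const std::map<std::string,int>& state) {
    for (const auto& t : current->transitions)
        if (t.condition(state)) return t.next;
    return current;
}
```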

### #5137507 Melee weapons in entity component systems

Posted on 09 March 2014 - 04:53 AM

A full-fledged solution to this problem would be to redefine what a weapon is. Looking at a firearm, it's the flying bullet that causes harm. In general, for ranged weapons it's the projectile, and for melee weapons it's a cutting edge or a tip. Hence weapons cause harm only in specific situations: the bullet must fly, the sword needs to swing, etc.

With a DamageSystem in existence, let the entity register a DamageCause component with it. A DamageCause holds a simple geometry, either explicit or implicit, e.g. a straight line in the case of a firearm and a line following the cutting edge in the case of a sword. This geometry is given with the weapon's local co-ordinate system as reference, so it "follows" the weapon. A DamageCause further has a damage value, perhaps a damage type (if you want to consider specialized armor), but especially an enabling flag. The DamageSystem will check for collisions only if the enabling flag is set to true.

The mechanism that sets said enabling flag differs by weapon type. A bullet is usually not visualized / animated because of its high speed, so the player / AI controller is probably the part that enables the DamageCause due to an action command. A sword is harmful only when used as a weapon; hence the animations showing a sword swing or jab enable the DamageCause (an animation need not be restricted to playing a sequence of pictures; any value can be animated if a belonging animation track exists and the variable of interest is bound to it).

Notice that nothing hinders you from adding another DamageCause to a rifle with a geometry along the stock, and enabling this DamageCause by an animation that shows some battering (not sure whether this is the correct word for what I mean) with the rifle. A sword may also have different DamageCause components for different kinds of attacks.

What is left now is a mechanism that handles the load state and so on of firearms. This can be solved by adding a second component, e.g. a ChargeComponent (perhaps in derived forms), to model the specific features. It provides specific controls to the player (e.g. reloading), but also needs to be considered by the player / AI controller (because of some state dependency).

An entity becomes, well, let's say "a weapon" now by adding a DamageCause. Notice that in principle a vase may become a weapon when used as a missile. The entity may have a dependency on projectiles, and it causes damage only if used as a weapon.
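
A sketch of what such a component and the DamageSystem's filtering could look like (all field and function names are my own; the geometry is reduced to a segment, which covers both a bullet path and a cutting edge):

```cpp
#include <cassert>
#include <string>
#include <vector>

// A segment in the weapon's local frame; endpoints a and b.
struct Segment { float ax, ay, az, bx, by, bz; };

// The DamageCause component described above: geometry, damage value,
// damage type, and the enabling flag the DamageSystem checks.
struct DamageCause {
    Segment geometry{};
    float damage = 0.0f;
    std::string damageType;
    bool enabled = false;    // set by an animation or a controller
};

// The DamageSystem considers only enabled causes for collision checks.
std::vector<const DamageCause*> activeCauses(const std::vector<DamageCause>& all) {
    std::vector<const DamageCause*> out;
    for (const auto& c : all)
        if (c.enabled) out.push_back(&c);
    return out;
}
```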

### #5137310 OpenGL Matrix issues

Posted on 08 March 2014 - 07:22 AM

> … The function getGLTransform() just get the matrix from camera and transposes it.

That would be wrong in general. You need to invert it (the view matrix is the inverse camera matrix); transposing is sufficient if and only if there is no translation of the camera involved. Maybe you use the transposed matrix here w.r.t. the following, but that still does not free you from computing the inverse.

> I choose to made it post-multiplication because, for me, it is more intuitive to have the transformations applied from first to last.

Routines like glTranslate, glRotate, and glMultMatrix (assuming these are the obsolete standard immediate-mode OpenGL routines) have their definition: they create a new transformation matrix and multiply it on the right side of what is currently on the stack. You cannot alter this behavior if you stick with those routines! Hence using this order

```glTranslate( model_position );
glMultMatrix( view_matrix );
```

is wrong! Notice that if you want to reverse the order of transformations, you need to consider that

A * B == ( B^T * A^T )^T

so that both involved matrices need to be transposed and the result is transposed. The gl* transform routines do not create those transposed matrices, so you get something wrong.

Hence: get rid of the obsolete transform stuff. Use your own matrix library. Stick with a once-chosen convention (i.e. use row vectors or else column vectors), and change it only when crossing over to OpenGL (and only if needed). And foremost, as mhagain has written: learn how matrices work.

### #5137297 Small inventory system Java (Novice)

Posted on 08 March 2014 - 04:52 AM

The design shown in the OP has several problems. I know that you're still learning, so don't see the following as critique but as tips. I also don't know which language constructs are already in your toolbox, so feel free to skip one or the other tip until you are prepared to consider it.

First off, class Item is a factory for items, a container for items, and an item itself. That is a bad idea because it burdens the class with 3 different responsibilities. A principle in OOP is the "single responsibility principle", which means that a class should be written so that it does one thing. This is not a hard requirement, but it is usually a good idea to follow it. So it would be a Good Thing (™) to have classes Item, ItemFactory, and ItemContainer. Notice please that Inventory is also a candidate to be an ItemContainer.

Looking at the factory first, it is notable that there is not exactly one factory, as would be expected. Even if you had separated the factory functionality into an ItemFactory, you could still instantiate more than one. This is okay in principle, but you want the factory to create unique identifiers. This can be done only if either there is one and only one factory instance, or else all existing factories are synchronized when generating an item identifier. There is a so-called design pattern with the name "Singleton" (you may ask the Internet for it). Singletons are a bit controversial because of their global nature. Nevertheless, an application-wide sole factory is IMHO a valid use case. It works so that you cannot instantiate the class ItemFactory directly, but invoke a static function that always returns the same object. This object is generated on the very first invocation, stored internally, and returned as-is for every subsequent invocation. This is in fact an OOP approach to a class with all static members, which itself is another possibility to make the factory unique.
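
The pattern just described, sketched in C++ for brevity (in Java the same shape is a private constructor plus a static getInstance(); the class and member names here are illustrative):

```cpp
#include <cassert>

// An application-wide sole factory: the instance is created on the first
// call of instance() and the very same object is returned ever after.
class ItemFactory {
public:
    static ItemFactory& instance() {
        static ItemFactory theFactory;   // created once, on first invocation
        return theFactory;
    }
    // Unique identifiers are guaranteed because there is only one counter.
    int createItemId() { return nextId++; }
private:
    ItemFactory() = default;                          // no direct instantiation
    ItemFactory(const ItemFactory&) = delete;         // no copies either
    ItemFactory& operator=(const ItemFactory&) = delete;
    int nextId = 1;
};
```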

The class Item itself should hold the attributes that are common to items. Your Item.itemDB array isn't such a thing. Why should, for example, a knife have contained items? The member itemDB and its companion routines are good for the ItemContainer class but not for the Item class. This doesn't mean that a concrete item is not able to hold other items: you can derive the class ItemContainer from the class Item, so it is itself an item and can contain other items. You can search the Internet for a design pattern named "Composite" if you are interested in the nitty-gritty details.

Your Inventory class suffers from some of the above issues as well. More to this later ...

> … I cant seem to figure out how to insert items from "Item array" into "Backpack array" just by using reference ID which is also happens to be array index.

The problem with this kind of index addressing is that all containers (counting the array among them) need to have the same number of slots. I'm using this e.g. for resource handling, where a resource (of a specific type) is referred to by the resource library as well as by the concrete graphic rendering device. It works because on both sides the number of slots needed by the containers is the same. But in your case it isn't: an inventory is usually much smaller than your main table (or else you'll waste memory).

Now, in your current implementation the class Inventory has a member that can refer to a single Item instance, but it has an array for references to Inventory instances. This is spurious. I would expect it to have no reference to other inventories but a container (an array in your case) for items, because an inventory should be able to hold several items! If done so, and if the array is chosen big enough to have a slot for each Item.id that can occur, you can store items within the inventory using Item.id as index, just as is currently done in the Item class.

> It seems to be a project problem. An "Item" class should be used to implement a general item, not more than one. If you wanted a different class for swords, you could create another class called "Weapon", which inherits "Item", and another one called "Sword". Then, you could create a class to store ANY type of Item, no matter if they are a sword or a jewel or a coin. I'll give you a little example without being too complex about this.

Although this isn't wrong in principle, and is actually correct to a degree, it should be mentioned that OOP has a second mechanism besides inheritance, and that is composition. I urge the OP to notice this because inheritance, being the signature innovation of OOP, is somewhat forced by textbooks and internet sources, so it seems to be the magic weapon. This is not true. Relying only on inheritance does not solve problems but leads to unmaintainable code. This becomes clear only after a while, namely when the codebase begins to grow noticeably. Either you end up in spaghetti inheritance or in the so-called "God class".

To explain it a bit further, I exaggerate a bit:

Item <= RightHandUseable <= Weapon <= Cutter <= Sword <= MagicSword <= SwordOfKalmahar <= EnhancedSwordOfKalmahar

Or should it better be

Item <= Weapon <= Cutter <= RightHandUseable <= ...

This can be defused a bit by using mix-in inheritance, i.e. inheriting orthogonal functionality, but in Java that is possible only by inheriting interfaces, so you have to implement them each and every time. Now, what about a Stick? Is it inherited from Tool or, specialized, Lever, because it is a tool to activate a mechanism when inserted into a slot, or is it a Weapon because it can be used as a Cudgel?

Doing all of this right is probably too much while still learning the language; but don't get trapped by the ideas coming from a single source. Be open.
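
For contrast with the deep hierarchy above, here is what the composition route can look like, sketched in C++ (aspect and item names are illustrative): the Stick problem dissolves because an item simply aggregates whichever orthogonal aspects apply.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// An orthogonal aspect of an item, e.g. "usable as a lever".
struct Aspect {
    virtual ~Aspect() = default;
    virtual std::string name() const = 0;
};
struct Lever  : Aspect { std::string name() const override { return "lever"; } };
struct Cudgel : Aspect { std::string name() const override { return "cudgel"; } };

// An item is composed of aspects instead of sitting in a deep hierarchy;
// a stick can be both a lever and a cudgel without a diamond inheritance.
struct Item {
    std::vector<std::unique_ptr<Aspect>> aspects;
    bool has(const std::string& n) const {
        for (const auto& a : aspects)
            if (a->name() == n) return true;
        return false;
    }
};
```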

### #5136955 Entity Component System and Parent Relations

Posted on 06 March 2014 - 03:10 PM

> 1) How does your system know what position component to update? In your example, you have a placement component that stores a local position, which is the position relative to the parent's position. Now suppose someone kicks the wheel. How does the physics system know to update the Parenting component's local_placement rather than the Position component's position?

Nope. A Placement component always stores a global placement (I use the term placement for a combination of position and orientation). When an entity has a Placement component it is placed in the world. Notice that a global placement makes sense by itself but a local placement does not, because the latter requires a reference frame. The local placement e.g. within the Parenting component is not a component but a member value. I use an uppercase "Placement" for the component (written like a classname) and a lowercase "placement" otherwise.

So the question is how does the physics system know whether to update the local placement or the global Placement of the wheel. So read on ...

> 2) How does the rendering system resolve the "final" position of the entity?

It doesn't, because it need not. Notice that the rendering sub-system is the last one executed during an iteration of the game loop. When it comes into play, all of animation, physics, collision resolution and so on is already done. Hence all Placement components are up-to-date and, because those components always store the global placement, ready-to-use.

Nevertheless, the Placement instances which are controlled in some way need to be updated. Notice that this problem is unrelated to CES. Animation-controlled placements are driven when the animation system updates, of course. Placements that are controlled by what I call mechanisms are updated when the belonging sub-system (named SpatialServices) is updated. You can solve this problem using a dependency graph with or without dirty-state propagation, as usual, where the edges of the graph are defined by the components implementing the mechanisms. Physics and collision resolution can work as usual.

> 3) And lastly, what happens if there are several different types of these parenting-type components (you mention having 12 different ones) on one entity? Or is that just considered a no-no?

It makes no sense to run competing controllers on the same value. Well, this brings up the question of what the controlled value is. A placement can be seen as one compound value, as a position and an orientation, or as 3 position scalar channels and 3 orientation scalar channels. The last one is nowadays typical for 3D content creation packages but probably too fine-grained for games. The first one is perhaps too coarse. I use the separate position and orientation channels. Hence a controller may control both channels or one of the two. For example, the Tracking component naturally controls the orientation only, while an Orbiting component controls both channels. If meaningful, the control by a component can be switched on or off. With respect to physics, the controlled channels are constrained.

Additionally, it is valid to have several controlling components if they don't compete because all but one are disabled. For example, it is okay to have several Grip components on an entity as long as at most one of them is paired with a Grab.

### #5136762 Game Event System

Posted on 06 March 2014 - 05:35 AM

There are many possibilities for dispatching events, and there are many solutions for how to supply your game sub-systems with information. Of course, it depends on your overall coding philosophy and the particular purpose you are interested in.

Registering to be notified of each and every event may be as bad as registering for each single kind of event. It depends on how many kinds of events exist, how high the rate of events is, how many listeners there are, how many kinds of events a listener is interested in, how many virtual functions need to be invoked for dispatching an event, how high the administrative effort is to manage listeners / listeners plus event filters, and perhaps more. Dispatching inside the listener by using a switch, registering with distinct event queues, using a filter on events before invoking the listener, … are all valid approaches. Perhaps a mix of several of those is the way to go in the end. However, the article is titled "Simple Event Handling" ;)

In the OP you are speaking of input (and perhaps other OS / windowing-system generated events) as well as of game-related "high level" events (without further specification). It is probably not wise to mix them up. Besides the fact that the solution I'd prefer doesn't propagate input at all, you may think of translating input into game events at an early stage. E.g. if a handler detects a specific input situation when investigating the current input queue (I mean a self-implemented queue here, not the OS-provided queue), then it generates a game event according to an input configuration (you know, the player likes to map her owned input devices to game events).

### #5136746 Input handling in ECS system

Posted on 06 March 2014 - 03:58 AM

Using low-level sub-systems from high-level sub-systems is okay because it is the natural order when layering. Input is a low-level system, so I don't see a restriction in using it from e.g. your core game sub-system.

The solution I'm fine with so far is that Input as a sub-system just gathers and maintains low-level input and provides a plug-in mechanism that allows other sub-systems to investigate the queued input. A sub-system that is interested in input generates an InputHandler. An InputHandler is responsible for detecting situations in the queued input and usually for translating them into commands (as a form of high-level input) according to the input configuration (i.e. a mapping that defines which input situation should generate which command).

At least the higher-level sub-systems need to be processed in a defined order anyway. I do not propagate input by messaging but let the sub-systems run their InputHandlers as needed inside their respective update methods.
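
A minimal sketch of the plug-in mechanism described above (the event layout, handler interface, and the "space maps to jump" mapping are illustrative assumptions, not anything from the thread):

```cpp
#include <cassert>
#include <string>
#include <vector>

// A low-level input event as it would sit in the self-implemented queue.
struct InputEvent { std::string key; bool pressed; };

// The plug-in interface: an interested sub-system derives its own handler,
// inspects the queue inside its update, and translates situations into
// high-level commands.
struct InputHandler {
    virtual ~InputHandler() = default;
    virtual std::vector<std::string> translate(const std::vector<InputEvent>& queue) = 0;
};

// Hypothetical handler with a hard-wired mapping: a pressed "space"
// generates a "jump" command. A real handler would read the mapping
// from the input configuration.
struct GameplayHandler : InputHandler {
    std::vector<std::string> translate(const std::vector<InputEvent>& queue) override {
        std::vector<std::string> commands;
        for (const auto& e : queue)
            if (e.pressed && e.key == "space")
                commands.push_back("jump");
        return commands;
    }
};
```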
