


#5187614 Planning a Complex Narrative

Posted by haegarr on 17 October 2014 - 03:13 AM

Dialogue trees are closest to what I'm considering, but I'm not sure I'll go with the traditional method used. Instead I've worked out a sort of 'binary' system for ease of response, yes or no, positive, negative or neutral, agree or disagree, etc. Responses for quick interaction while preserving immersion; rather than reading through an extensive amount of text to select something closest to what you want. Which also permits ease of calculating an NPC's admiration or aversion towards the player.

While researching in-game dialogue structures on the internet, I came to a similar conclusion regarding the phrases to be said by the player.

* They should not be (too) verbose, especially (but not exclusively) if voice acting is in play. This lets the player pick a phrase quickly while still staying interested in reading / hearing the full version.

* They should be marked with their effect on the interlocutor, so that players who don't speak the game's language natively need not parse the nuances of each sentence.

* The choices should be limited to a few (perhaps 5 at most).

* The game settings may offer a switchable conversation aid that sorts the choices by how well they fit the player character or the story.


However, such things are controversial anyway...
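That said, here is a rough sketch of how such marked, sortable choices could be represented (all names are made up, of course):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical sketch: a dialogue choice tagged with its effect on the
// interlocutor, plus a "fit" value for the optional conversation aid.
enum class Effect { Negative = -1, Neutral = 0, Positive = +1 };

struct DialogueChoice {
    std::string shortText;  // terse label shown in the choice menu
    Effect      effect;     // marked effect on the interlocutor
    int         storyFit;   // higher = fits the character / story better
};

// The optional "conversation help": sort choices by fit and cap the
// menu at 5 entries, per the guidelines above.
std::vector<DialogueChoice> buildMenu(std::vector<DialogueChoice> all) {
    std::sort(all.begin(), all.end(),
              [](const DialogueChoice& a, const DialogueChoice& b) {
                  return a.storyFit > b.storyFit;
              });
    if (all.size() > 5) all.resize(5);
    return all;
}
```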

#5186459 Recommend a book for algorithmic 3D modelling theory

Posted by haegarr on 12 October 2014 - 05:10 AM

AFAIK (but maybe I'm not correctly informed) ...


Methods like beveling, union, and such are collected under the term "constructive solid geometry", or CSG for short. CSG can be counted as part of algorithmic modeling in that it provides useful tools.


However, algorithmic (or procedural) modeling by itself usually means that geometry is generated from a set of rules. This is often applied to architecture or plants: for example, how to automatically place windows and doors in house facades, how to place houses and streets to form a city, how to branch trees, how to place petals, how to generate a terrain, and so on. The classic books covering these aspects are

* Texturing & Modeling - A Procedural Approach

* The Algorithmic Beauty of Plants
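As a tiny illustration of rule-based generation, here is a minimal L-system rewriter in the spirit of "The Algorithmic Beauty of Plants" (the axiom and rules below are just an example):

```cpp
#include <map>
#include <string>

// Minimal L-system rewriter: geometry is described by a string that is
// grown by repeatedly applying rewrite rules to each symbol.
std::string rewrite(std::string axiom,
                    const std::map<char, std::string>& rules,
                    int iterations) {
    for (int i = 0; i < iterations; ++i) {
        std::string next;
        for (char c : axiom) {
            auto it = rules.find(c);
            // Replace the symbol if a rule exists, otherwise keep it.
            next += (it != rules.end()) ? it->second : std::string(1, c);
        }
        axiom = next;
    }
    return axiom;
}
```

Starting from axiom "A" with rules A → AB and B → A, three iterations yield "ABAAB"; interpreting the final string with turtle graphics then produces the actual geometry.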

#5185091 Precision on StateMachines

Posted by haegarr on 05 October 2014 - 03:29 AM

(as described in http://www.gamedev.net/page/resources/_/technical/game-programming/your-first-step-to-game-development-starts-here-r2976)

I'm not sure where in the cited article a state machine is described. So I assume you are speaking of explicitly modeling a state machine in code. Here are some thoughts:


1.) Parallel state machines and hierarchical state machines were invented after it was noticed that a single state machine does not handle more complex situations well. In this sense they are not really clumsy. However, I would not categorize your current need as a complex situation.


2.) If the number of states involved is low, in this case the number of states denoting movement, then the "combinatorial explosion" (that originally led to hierarchical state machines) can be accepted. I.e. all the states { move_forward, move_backward, ..., attack_forward, attack_backward, ... } may be generated explicitly, so that the attacking states implicitly encode the movement state as well. This is a clean solution from the point of view of a standard state machine, but obviously not the most popular one among developers ;)


3.) Activation and de-activation of states can be explicit, i.e. there are State::enter and State::exit methods which are invoked on the next current state and the now obsolete current state, respectively. If the invocation passes the other state as an argument, the next current state can store it for later use. This would implement a dynamic transition mechanism.
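A minimal sketch of point 3 (the class layout is an assumption, not from any particular engine): enter() receives the previous state, so the entered state can remember where it came from and transition back dynamically.

```cpp
#include <string>
#include <utility>

class State {
public:
    explicit State(std::string name) : name_(std::move(name)) {}
    virtual ~State() = default;
    // The entered state stores its predecessor for later use.
    virtual void enter(State* previous) { previous_ = previous; }
    virtual void exit(State* /*next*/) {}
    const std::string& name() const { return name_; }
    State* previous() const { return previous_; }
private:
    std::string name_;
    State* previous_ = nullptr;
};

class StateMachine {
public:
    void transitionTo(State* next) {
        if (current_) current_->exit(next);  // old state is told its successor
        next->enter(current_);               // new state is told its predecessor
        current_ = next;
    }
    State* current() const { return current_; }
private:
    State* current_ = nullptr;
};
```

A dynamic transition back is then simply `sm.transitionTo(sm.current()->previous())`.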


4.) A more general approach would be to have a state stack where the current state can look up a kind of history of state invocations. This also would allow for a dynamic transition mechanism.


5.) The state machine need not necessarily be explicit. In fact, a computer works as a state machine anyway. The set of current values of every variable in your game can be understood as the current state, and changing the values can be understood as a transition. For example, a variable action may have the value 0 for standing, 1 for moving, and 2 for attacking. Another variable direction may have the values 0 to 3 for forward, left, right, and backward, respectively. The values of action and direction together then make up the state. As you can see, at any time they form one of 12 possible combinations.
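Point 5 as a sketch in code: the pair (action, direction) is the state, and assigning a new value is the transition; no explicit State classes at all.

```cpp
enum Action    { Standing = 0, Moving = 1, Attacking = 2 };
enum Direction { Forward = 0, Left = 1, Right = 2, Backward = 3 };

struct Character {
    Action    action    = Standing;
    Direction direction = Forward;
};

// One of the 3 * 4 = 12 possible states, encoded as a single number in
// case an index is ever needed (e.g. for a table of animation clips).
int stateIndex(const Character& c) {
    return c.action * 4 + c.direction;
}
```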

#5184736 json c++

Posted by haegarr on 03 October 2014 - 03:16 AM

Though, I seriously have the damnedest time reading documentation. I guess I can understand what any particular function is kind of doing, but I seriously have a difficult time figuring out how it all fits together. Maybe that just comes with time, but right now, for me, it's like trying to understand how to speak a language just by reading a dictionary. Trial and error pretty much ended up winning out today, over actually understanding the documentation

This is a reason why tutorials exist. A (good) tutorial explains how to use what in which situations, while documentation enumerates all possibilities without much regard for use cases. Tutorials are much more suitable for learning; documentation is good for looking up details, or things learned earlier but forgotten in the meantime.

#5184523 json c++

Posted by haegarr on 02 October 2014 - 03:07 AM

That's the nice thing about abstraction. A stream is just a source of data until it says it has no more data. Whether the data is coming from a file, the network or some other arcane source is irrelevant for the user of the stream.

Yep, that's the consequence of the "generic reading" mentioned above :) Although many libraries nevertheless offer various methods of input management; the FreeType library does, for example.


Well, just as clarification for the OP, there are also caveats to consider: The library must not be allowed to read beyond the logical end of the data. If a stand-alone file is opened and wrapped by a stream, then there is a natural end. Giving the library a stream on a network socket, a generous memory block, or a package file may allow the library to read more bytes than intended for its purpose. This should be considered, e.g. by using an appropriate stream (if one exists) or by implementing a wrapping stream that reports an EOF when the logical end of the data is reached. Similarly, if seeking is supported, the outermost stream may have to handle an appropriate offset.
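A rough sketch of such a wrapping reader (the interface is made up; a real implementation would wrap whatever stream type the library expects): it exposes only a window of the underlying buffer, e.g. one entry of a package file, and reports EOF at the logical end.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>

class BoundedReader {
public:
    BoundedReader(const char* data, std::size_t offset, std::size_t length)
        : data_(data + offset), length_(length), pos_(0) {}

    // Reads up to n bytes; returns the number actually read (0 == EOF).
    std::size_t read(char* out, std::size_t n) {
        std::size_t avail = length_ - pos_;
        std::size_t count = std::min(n, avail);
        std::memcpy(out, data_ + pos_, count);
        pos_ += count;
        return count;
    }

    // Seeks relative to the logical start, clamped to the logical end.
    void seek(std::size_t pos) { pos_ = std::min(pos, length_); }

private:
    const char* data_;   // start of the logical window
    std::size_t length_; // logical end of data
    std::size_t pos_;    // current read position within the window
};
```

The wrapped library can hammer `read` as much as it likes; it never sees the neighbouring package entries.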

#5184510 json c++

Posted by haegarr on 02 October 2014 - 01:32 AM

... However, it's only reading from a string created in the code. Is there supposed to be a way to load directly from a file in the directory? I was about to try to just load a string from a file using fstream and then parse it, but I thought there's perhaps a cleaner way (hopefully within the jsoncpp code) to do this.

Directly loading files is not necessarily the "cleaner way". It requires dealing with the differences in file path syntaxes (because many libraries are multi-platform), and it requires the source to be available as a stand-alone file, of course. Think of reading from a network source, or reading file fragments embedded in your own file format. This is not that unusual: games often use package files to reduce the number of individual files for performance and/or maintenance purposes; or think of the preview images or color profiles embedded in e.g. PSD files. The library should also work if the data is already in memory (perhaps loaded as a blob from a package file, or received from a network socket). Maybe it should even work with fragments only (similar to XML fragments), although encoding information is no longer available from a fragment.


IMHO, the cleaner way is to give a library a generic way of reading content, and to do any file handling (locating, opening, closing) and loading externally to that.
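For illustration, such a generic reading interface could be as simple as a read callback (all names here are hypothetical, not jsoncpp's actual API):

```cpp
#include <cstddef>
#include <functional>
#include <string>

// The parser takes a read callback instead of a file name, so locating /
// opening / closing the actual source stays outside the library.
using ReadFn = std::function<std::size_t(char* out, std::size_t n)>;

// Stand-in for the library side: drains the callback into a string. A
// real parser would consume the bytes incrementally instead.
std::string readAll(const ReadFn& read) {
    std::string result;
    char buf[256];
    std::size_t n;
    while ((n = read(buf, sizeof buf)) > 0)
        result.append(buf, n);
    return result;
}
```

The caller can then wrap a file stream, a memory blob, or a socket in such a callback without the library knowing the difference.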

#5183080 Creating a 3D Game Engine [Level Editor]

Posted by haegarr on 26 September 2014 - 02:14 AM

If you can think of anything else that would be essential for a level editor please add it.

A level editor is a tool used to assemble a game level. That is more than just placing geometry. Things like

  * support for game objects (CES)

  * bounding volumes

  * support for path-finding

  * baked lighting

  * attaching shader scripts

  * triggers

  * texturing

  * texture packing

  * sound & music

  * regions & portals

  * rigging

  * animation clips

  * animation trees & blending

  * NPC behavior

  * scripting

  * ...

  * and not to forget writing level files

come to mind.


So, what is essential? All that is needed for the type of game and level. Some assets may be readily imported, and perhaps you have a tool chain that generates other content. I don't know; you haven't told us enough details.

#5182450 How to transfer/manage objects from one class to another

Posted by haegarr on 23 September 2014 - 12:05 PM


For instance: Should the coordinates x and y and a (rotation) not all be replaced by a matrix by itself. But then an object has two transformation => one for it's coordinates and one for it's actual placement in the world/parent...

The homogeneous co-ordinate is, so to say, an extension that allows a translation to be expressed as a matrix multiplication. Without it, a translation would have to be expressed as an addition. The advantage of having it as a multiplication is that you can compute a single matrix for any combination of translations, rotations, and scalings.


What I call a placement is in fact a position and an orientation of an object relative to its super-ordinated object (I usually ignore scaling for a placement). A global placement positions an object in the world, a local placement positions a child object relative to its parent object. When you apply such a transform matrix you actually multiply geometry (vertex positions, normals, tangents) by that matrix. It is interpreted as "the vertex position / normal / tangent is given in model local space, but I want it in the global / parent space, hence I multiply by the respective transform matrix". Mathematically it does not matter which position you transform; so instead of a vertex position you can use the point (0,0,1). Remember that (a) we use homogeneous co-ordinates, hence the "1", and (b) we are in model local space, so (0,0,1) is actually the local origin of the model. And when we transform the origin into the global / parent space, we have computed the position of the model in the global / parent space. Therefore we have one transform for both the model's geometry and its placement!


An example: The position of the model in the world should be (x,y,1) and its rotation should be the identity:

    M := R(0) * T(x,y) = I * T(x,y) = T(x,y)

The identity matrix (all elements 0 except the main diagonal elements, which are 1) has no effect. Notice that I'm using row vectors here, so that the common order "geometry is rotated in place, and the rotated geometry is translated" is written from left to right in the formula. Now, using this to transform the position (0,0,1) you get

    (0,0,1) * M = (0,0,1) * T(x,y) = (0*1+0*0+1*x,0*0+0*1+1*y,0*0+0*0+1*1) = (x,y,1)

the said model origin in global / parent space. Try it on paper; it works. :)


What exactly gets stored in a Placement object is a question of practice. It is convenient to store the affine transform matrix, of course. It may also be okay to store other parameters, e.g. a position and an angle, so that the matrix is computed from those parameters.


So you have an extra "coupling" object that has two parameters => the parent parameter and the child parameter. And you continuously concat the transformation of your child with the coordinates of your parent. 
Here again i think i don't really understand it all the way, because it would make more sense if the coordinates of the parent would be in a transformation matrix. In that way you can just set the transformation with the coordinate-transformation of the parent and your done.

All that is said above is only a prelude to parenting. As said, a placement defines a single spatial relation. For parenting, we have a defined spatial relation of the child w.r.t. its parent, say LC, and we want to compute the spatial relation of the child w.r.t. the world (for rendering purposes, for example), say MC. As we know, applying a transform matrix brings us from the local to the parent space. Here we want to go from the model local space to the parent space and further to the world space. So we do this in two steps and get a combined transform matrix

   MC := LC * MP

where MP is the transform matrix of the parent.


Doing so means that the Parenting object has the following parameters:

a) A reference to the model's global Placement, perhaps indirectly by a reference to the model itself; within this Placement the matrix MC is stored.

b) A reference to the parent object's global Placement, perhaps indirectly by a reference to the parent itself; within this Placement the matrix MP is stored.

c) The model's local Placement where the matrix LC is stored.


When the Parenting is called to update, it requests MP from the parent's Placement, requests LC from the own parameters, computes MC, and stores the result in the model's global Placement. Voila.
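That update step could be sketched like so, with 3x3 row-vector matrices as in the formulas above (all names are made up):

```cpp
#include <array>

// 3x3 homogeneous matrix for row vectors: translation in the bottom row.
using Mat3 = std::array<std::array<double, 3>, 3>;

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Mat3 translation(double x, double y) {
    return {{{1, 0, 0}, {0, 1, 0}, {x, y, 1}}};
}

struct Placement { Mat3 matrix; };

// The Parenting object: references the child's global Placement and the
// parent's global Placement, and owns the child's local Placement.
struct Parenting {
    Placement* childGlobal;
    const Placement* parentGlobal;
    Placement childLocal;

    void update() {  // MC := LC * MP
        childGlobal->matrix = mul(childLocal.matrix, parentGlobal->matrix);
    }
};
```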


If the latter is correct, do you only have to call this extra object once  at initialization? Or even further: is this object really necessary? Can't i just set the parents transformation at the child's construction?

This extra object needs to be called whenever

a) the parent's global Placement matrix has changed, or

b) the model's local Placement matrix has changed,

and c) you need to access the current global Placement matrix of the child.


It is part of the update mechanism inside a game loop similar to (but not exactly the same as) those of, say, the animation sub-system.

#5182061 simple crafting system

Posted by haegarr on 22 September 2014 - 04:58 AM

SyncViews has mentioned many of the design flaws already. Here are some more:


1.) public bool ItemRecipe.CanCraft


a) ... should not be publicly writeable because there is no sense in setting it from outside the class, BUT ...


b) ... is not meaningful at all, because with every change in the inventory the cached CanCraft may become obsolete


2.) public bool ItemRecipe.CheckForItemsNeeded(Player player)


a) ... needs an Inventory object but gets a Player object; that restricts flexibility and burdens the recipe with unneeded knowledge of what a Player is


b) ... doesn't compute what you want (SyncView has mentioned it).


3.) public class HammerRecipe : ItemRecipe


a) Seconding SyncViews: Do this in a data driven way, not by inheritance! Even better: Do this in a data driven way, NOT by inheritance!! ;)
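To illustrate the data-driven alternative (hypothetical names, not your actual classes): a recipe becomes a table of ingredient counts, so a "hammer recipe" is just another data entry, and craftability is computed on demand instead of being cached.

```cpp
#include <map>
#include <string>

using Inventory = std::map<std::string, int>;

struct Recipe {
    std::string result;                      // e.g. "hammer"
    std::map<std::string, int> ingredients;  // item name -> required count
};

// Computed on demand, so it can never go stale when the inventory changes.
bool canCraft(const Recipe& recipe, const Inventory& inventory) {
    for (const auto& entry : recipe.ingredients) {
        auto it = inventory.find(entry.first);
        if (it == inventory.end() || it->second < entry.second)
            return false;
    }
    return true;
}
```

New recipes are then added as data (e.g. loaded from a file), without touching any class hierarchy.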


Be sure not to "overcode" things. You don't always need an elaborate class or several functions to handle something that could, technically, be stored in a handful of variables.

While this is true in principle ... Don't mix the representation with the data model and/or the mechanics. It is not over-engineering if data model, logic, and presentation are separated.

#5181464 Texture dimension vs file size performance

Posted by haegarr on 19 September 2014 - 12:48 AM

IIRC then ETC1 is the only format supported on all Android GLES 2.0 devices. PVRTC is supported by PowerVR GPUs only, S3TC is supported on NVidia GPUs, and ATITC is supported on Qualcomm GPUs.


So another possibility is to support the latter 3 formats (for textures with alpha) and to decide at start-up time based on the GL extension strings which files to load later on. This way makes the package bigger but has no negative consequences for the runtime.
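A sketch of that start-up decision (the extension strings below are the commonly reported names, but verify them against your target devices; in a real program the string would come from glGetString(GL_EXTENSIONS)):

```cpp
#include <cstring>
#include <string>

// Picks a texture file suffix based on which compression extension the
// GL implementation reports.
std::string pickTextureFormat(const char* extensions) {
    if (std::strstr(extensions, "GL_IMG_texture_compression_pvrtc"))
        return "pvr";
    if (std::strstr(extensions, "GL_EXT_texture_compression_s3tc"))
        return "dds";
    if (std::strstr(extensions, "GL_AMD_compressed_ATC_texture"))
        return "atc";
    return "etc1";  // fallback: ETC1, i.e. no alpha channel in the texture
}
```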

#5181230 How to transfer/manage objects from one class to another

Posted by haegarr on 18 September 2014 - 04:27 AM

As ever: We're discussing possibilities here. You need to decide what is helpful for your game and what can be ignored for now...


Like I understand it: The machine is the parent and the zone is the child. So the machine does the rendering of the zone (for example sake the zone is a visible component).
But as you said earlier, an object doesn't do it's own rendering, so does the machine add it's sprite together with the zones' sprite and give it to the renderer as a 'spriteset' or so?

The mechanism of parenting does nothing more than expressing a spatial placement relative to another one. This is outside of rendering, collision detection, or anything similar. Here is why and how:


In 2D a placement consists of a 2D position and an orientation angle. Those parameters can be used to compute a 3x3 homogeneous matrix expressing the transform of the sprite. The transform defines what to do to come from the local space (also called model space) to the parent space. I usually define that every model / sprite in the scene is initially placed globally, so that the transform matrix inherent to the model / sprite is a world transform matrix. In other words, each model / sprite has a world placement.
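For illustration, such a 2D placement could be turned into a 3x3 matrix like this (row-vector convention, translation in the bottom row):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Rotation by `angle` in the upper-left 2x2, position (x, y) in the
// bottom row; a row vector (vx, vy, 1) times this matrix lands in the
// parent (or world) space.
Mat3 placementMatrix(double x, double y, double angle) {
    const double c = std::cos(angle);
    const double s = std::sin(angle);
    return {{{ c,  s, 0},
             {-s,  c, 0},
             { x,  y, 1}}};
}
```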


In the case that a model / sprite needs to be parented, I usually add an explicit tool for this: the Parenting component. Attaching a Parenting makes the model / sprite the child. The Parenting includes a reference to another model / sprite, so that that other model / sprite becomes the parent of the child. The Parenting further includes a local Placement, i.e. a Placement that expresses the spatial relation to the referenced parent model / sprite.


In fact the Parenting component introduces a constraint on the world Placement of the child model / sprite, in that its world Placement is computed as the said concatenation of the local Placement with the world Placement of the parent. This fancy description is essentially a matrix product.


So ... what you can see here is that the above solution does an external coupling of models / sprites on a spatial level (external to the models / sprites, because it is based on the introduced Parenting class). The coupling updates the world transform of whatever is made the child model / sprite. This means that if everything is up-to-date, every model / sprite has a valid world transform in its Placement, regardless of whether it is a static one or computed by a Parenting (or by another mechanism; I have something like a dozen, including animation, of course).


Now, when collision detection or rendering comes to work, there is a collection of models / sprites, each with a valid Placement in the world. That is all the respective sub-system needs to know. "Parenting? What's that?" says the renderer.


As we are talking about doing the rendering outside the object, how does this work? Does the renderer iterate over all the objects, take their sprites and draw them on the coordinates+rotation of the object?

Yes. The renderer iterates the scene. Maybe there is a subset of scene objects that denotes all "drawables" or so. However, the renderer finds the objects, and uses the sprite as visual representation and the (as we know being valid) world placement's transform matrix. It usually does frustum / viewport culling with the help of some bounding box first.

#5180955 How to transfer/manage objects from one class to another

Posted by haegarr on 17 September 2014 - 02:57 AM

I would have two variables in my machine class:
Public Zone pickZone;
Public Zone dropZone;
Which are references to the zone objects. They are initialized when a machine is created.

This is a valid way. It does not differ significantly from using the reference Zone.machine as mentioned already. Merely the question whether the members should be public may arise.


Let's say the selling of a machine is done outside the machine class, how would you find out which zones to destroy together with the machine if you don't have these two variables?

Well, there are in fact other possibilities:


a.) Indirect coupling: Classes / objects in programming can represent data about anything, e.g. a concept, an idea, or, in this case, a relation. For example, a class ProductionUnit can be used to couple a Machine and two Zone objects together.


b.) Reverse coupling: A simple iteration over all Zone objects can be done to look up all zones docked to the machine in question. (This option becomes more and more unattractive the more zones there are and the more sales are done per time unit, of course. If you think "what the hell could this be good for?": Relational databases use this way to express one-to-many relations, and overcome the efficiency problem by indexing the related rows. In Java something similar could be achieved.)
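A sketch of option b.) — the names loosely follow the thread's Zone.machine example and are otherwise made up:

```cpp
#include <vector>

struct Machine { int id; };

struct Zone {
    int id;
    const Machine* machine;  // the machine this zone is docked to
};

// No back references stored anywhere: the zones of a machine are found
// by iterating all zones and comparing the forward reference.
std::vector<const Zone*> zonesOf(const Machine& m,
                                 const std::vector<Zone>& allZones) {
    std::vector<const Zone*> result;
    for (const Zone& z : allZones)
        if (z.machine == &m)
            result.push_back(&z);
    return result;
}
```

When the machine is sold, `zonesOf` yields exactly the zones that need to be destroyed with it.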


However, I don't say that you should use any of the above possibilities. Using direct coupling is fine in principle. The critique I made was with respect to the interaction between objects. E.g.: How many classes need to be adapted if you want to introduce a new kind of zone? Does a machine need to know what a sale is? Just such things, you know.


I'm not familiar with this. I've been trying to Google it, but for now only results about divorcing parents :-) I keep looking! (If anyone has good sources about that you may always let me know) .

What I mean is a relation that is commonly used in rendering graphics: Placing a graphical representation (often called the "child") not directly into the world but relative to a reference object (often called the "parent"). For the actual rendering the world placement of the child is then computed by concatenation of its local placement with the world placement of the parent.


While "the pickZone A is docked to the machine B", perhaps expressed by a reference like

   A.machine = B;

denotes a logical relation, parenting in the above sense means a spatial relation, in that it defines e.g. on which side of and how close to the graphical representation of the machine the graphical representation of the zone is attached. So when you rotate the machine, the zone rotates with it.

#5180760 How to transfer/manage objects from one class to another

Posted by haegarr on 16 September 2014 - 11:26 AM

One thing though: should the machine and the area-objects not stay connected?
For instance let's say one of the machines is mobile and it moved around. The areas have to move along, or the player sells a machine, the areas have to be destroyed. It seems like if I write all that outside the machine class it's gonna get messy!

What do you mean by "connected"? I think it is better when zones and machines are distinct classes. This allows a different hierarchy for zones and for machines. They can be logically connected (see the members DropZone::machine and PickZone::machine in the code examples above), and they can be connected spatially if wanted (by using a 2D or 3D parenting placement mechanism).


Selling a machine is a procedure outside the scope of the machine anyway. Relocating a machine is, too. I don't see a problem ATM. However, I also don't have an overview of your game design, so maybe I'm being too rigorous.


About the products: Is it better to let them only exist in the player/machine, or should I also put them in a collection of some kind. I'm not sure what is best. I'm leaning more to the first option, because it is less overhead and simpler, but then what to do when the player drops it's product, it should still exist on it's own.

Player avatar and machines are containers for products. Internally they use a collection if necessary. A more global collection of all products may be useful for the game mechanics,  but from what you've written so far I don't see a reason for that.


If a player drops a product, it is no longer contained but becomes an object in the world, e.g. like a machine or something. It may get a bounding volume and a collision handler, whatever fits the game design.

#5180496 Matrix Decomposition

Posted by haegarr on 15 September 2014 - 11:25 AM

The problem is that the draw call is done with a common matrix. Any suggestions? Any changes on the vertices are possible?

I don't know whether you can change the vertices directly, because you've told us nothing about how the vertices are passed.


If you have access to them then of course you can use the CPU to do the job. For example, if all sprites are given as quads, then each 4-tuple of vertices is to be processed together. If sprites are passed as 2 triangles, then each 6-tuple is. And if the vertices are passed by index, the same goes for the indices.


You can then compute the center of each sprite by summing the N vertex positions and dividing by N. Then subtract that center position from each of the N vertex positions, halve the resulting vector, and re-add the center. E.g.

for each sprite
   vec2 center(0, 0);
   for each of the N vertices of the sprite do
      center += vertex.pos;
   center /= N;
   for each of the N vertices of the sprite do
      vertex.pos = ( vertex.pos + center ) * 0.5;
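The same pseudocode as compilable C++, assuming the positions are accessible in a plain array and each sprite contributes a fixed number of vertices:

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Shrinks every sprite to half size about its own center.
void halveSprites(std::vector<Vec2>& positions, int verticesPerSprite) {
    for (std::size_t base = 0;
         base + verticesPerSprite <= positions.size();
         base += verticesPerSprite) {
        // Center = average of the sprite's vertex positions.
        Vec2 center{0, 0};
        for (int i = 0; i < verticesPerSprite; ++i) {
            center.x += positions[base + i].x;
            center.y += positions[base + i].y;
        }
        center.x /= verticesPerSprite;
        center.y /= verticesPerSprite;
        for (int i = 0; i < verticesPerSprite; ++i) {
            Vec2& p = positions[base + i];
            p.x = (p.x + center.x) * 0.5f;  // == center + (p - center) / 2
            p.y = (p.y + center.y) * 0.5f;
        }
    }
}
```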

#5180485 Matrix Decomposition

Posted by haegarr on 15 September 2014 - 10:54 AM

I guess that math-wise, you need to translate to each center and do the scale for each element of the group?

Yes: If p is the position of the center, then translate by -p so that the center comes to rest at 0, do the scaling, then translate back by +p.
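Applied point-wise to a position v, that composition boils down to (v - p) * s + p; a tiny sketch (assuming a simple 2D vector type):

```cpp
struct Vec2f { float x, y; };

// Scales v by factor s about the center p: translate by -p, scale,
// translate back by +p. The center p itself stays fixed.
Vec2f scaleAbout(Vec2f v, Vec2f p, float s) {
    return { (v.x - p.x) * s + p.x,
             (v.y - p.y) * s + p.y };
}
```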


Is the drawcall done with an own matrix for each sprite, or with a common matrix for all sprites?