

#5189364 OpenGL/GLSL version handling

Posted by haegarr on 27 October 2014 - 03:36 AM

Anyhow, I'm curious to know everyone else's approach to compatibility, as this is rather a new realm for me.

I use a 2-step mechanism: a coarse one based on dynamic library selection, and a fine one based on late binding. This works because the graphics sub-system is implemented in layers.

 

First off, somewhere in the start-up process of the application, an OpenGL context is created with the highest supported version. If this fails, a context with the next lower version is tried, and so on down to the lowest supported version. This can be done in a comfortable way on Mac OS, because it gives you the highest available version when asked for a 3.2 core profile anyway. On Windows the process is less comfortable. However, once a context has been created, the corresponding dynamic library is loaded. If this fails, then again the next lower version for context creation is tried. In the end I hope to have a library loaded successfully, or else the application is not runnable, of course.
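A minimal sketch of such a fallback loop (the version table and the helpers createContext / loadBackendLibrary are hypothetical placeholders for the actual platform code, e.g. WGL/CGL context creation and LoadLibrary/dlopen; they are only declared here):

    #include <string>
    #include <vector>

    // Hypothetical helpers, provided elsewhere by the platform layer.
    bool createContext(int major, int minor);           // wraps WGL/CGL context creation
    bool loadBackendLibrary(const std::string& name);   // wraps LoadLibrary / dlopen

    // Try context versions from highest to lowest; for each context that could
    // be created, try to load the matching backend library of the graphics sub-system.
    bool initGraphics() {
        struct Version { int major; int minor; std::string library; };
        const std::vector<Version> versions = {
            { 4, 5, "gfx_gl45" }, { 3, 3, "gfx_gl33" }, { 3, 2, "gfx_gl32" }
        };
        for (const Version& v : versions) {
            if (!createContext(v.major, v.minor))
                continue;                      // this context version is not supported
            if (loadBackendLibrary(v.library))
                return true;                   // context and backend library available
            // library missing or broken: try the next lower version
        }
        return false;                          // application is not runnable
    }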

 

Such a library brings in the lowest layer of my own implementation of the graphics sub-system. It is able to process graphics jobs that are generated within the middle layer of the graphics sub-system. A library that was loaded successfully guarantees a defined minimum of features. Additional features or better implementations are then recognized by the set-up routine of the layer loaded with the library, based on investigating the minor version number and the extensions. The layer then does symbol binding on its own.

 

Regarding shader scripts: Some scripts are assembled at runtime, and this assembly takes into account the mechanisms available from the loaded library. Pre-made scripts need to be available for each loadable library, of course, and hence are simply selected.

 

 

EDIT: BTW: The above mechanism is not only used for version compatibility but is abstract enough to also select between OpenGL and D3D.




#5188873 Game State management in Entity Component System architecture

Posted by haegarr on 24 October 2014 - 02:02 AM

ECS gives me migraines.
 
At first it sounds like a great idea, but you soon end up in a situation where a tiny change in the game can have a massive effect on performance.

Well, the performance hit can happen in any other architecture, too. 

 

It really depends on your definition of an "Entity".

 
Consider the full-on approach where you have an entity for everything and game objects are bags of entities. So a character running around the game would have ...
 
SkinnedMeshEntity, WeaponEntity, HealthEntity, BackpackEntity, etc.   Each of those would have its own bag of entities. (TransformEntity, CollisionBoxEntity, ......)

ECS stands for Entity Component System, although IMHO "component-based entity system" would be a better name. A "game object" is often used as a synonym for entity.

 

With this in mind ...

* SkinnedMesh is a component,

* Weapon is probably an entity in its own right (because it should probably be exchangeable),

* Health is a component, or perhaps better a variable of a Constitution component,

* Backpack is perhaps an entity with an Inventory component (if used as an item in its own right), or perhaps only an Inventory component.

 

Other things, e.g. terrain, are (or should be, at least ;)) never an entity, so "the full approach" is wrong from the start. However, what's wrong with a weapon, a clip of ammo, or a health pack (not the Health constitution!) being entities of their own? What's wrong with each of them having its own placement, its own mesh, and other components? Nothing, because your game needs this! You want the weapon to lie around on the ground, ready to be picked up, so it is placed in the world. The weapon clearly has its own mesh.

 

So suddenly this one game object contains hundreds of entities. Nasty but workable. Things get really nasty when you start needing references between entities.

Well, I think dozens of components would already be more than sufficient. However, the aspect you are indicating here, if I understand you correctly, is that parts that were usually embedded as simple variables now become software objects of their own. The typical example is the placement, since close to everything is placed somewhere in the game world and hence needs a placement. The point is that "close to everything" is not "everything", and with a reasonable amount of static geometry the argument may even be wrong altogether. However, even if I accept that a placement is needed by close to everything, it is just a single isolated example facing a much greater number of counter-examples. And of course nothing prevents you, in the end, from embedding some "very important components" directly. Unity3D does so with the placement, for example, although I do not agree with this approach ;)

 

The weapon needs bullets, so it has a reference to a BulletMagazineEntity. A skinned mesh needs an animation entity. A character needs a target to shoot at, which is an entity.... you see where this is going?

Again, use the solution that is suitable.

 

* A firearm needs ammo, it does not need a BulletMagazine per se. After picking up an ammo clip, the ammo counter could simply be increased and the clip entity could be discarded.

 

* There is no technical / architectural reason why a skinned mesh (a component) needs an animation (a component). It is a decision of the designer. S/he could create a static skinned mesh entity. Also, it's the entity that needs both the skinned mesh and the animation (as said, by design).

 

* A character needs a target to shoot at, which is an entity ... well, what else should it be? I go even further: I say that shooting is an action that checks against a BoundingVolume component of an entity, and if the hit test passes, it causes a defined decrement of a variable called "Health", found in the Constitution component of the targeted entity. There is no Constitution / Health? Then shooting to death is not possible. There is no BoundingVolume? Then aiming at the entity is not meaningful, perhaps not even possible. (One can see from this example that components can also be used to tag entities for game-related purposes. You can even mark a BoundingVolume to be used especially for shooting, for example.)
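As a minimal sketch of this idea (the component types, the Ray, and the entity layout are hypothetical illustrations, not taken from an actual implementation):

    struct Ray { /* origin, direction, ... */ };

    struct BoundingVolume { bool intersects(const Ray& shot) const; };  // hit test, details omitted
    struct Constitution   { int health = 100; };                        // the "Health" variable lives here

    struct Entity {
        BoundingVolume* boundingVolume = nullptr;   // optional component
        Constitution*   constitution   = nullptr;   // optional component
    };

    // Shooting checks the target's BoundingVolume; only if the hit test passes
    // and a Constitution component exists is its Health variable decremented.
    void shootAt(Entity& target, const Ray& shot, int damage) {
        if (!target.boundingVolume)                    return;  // cannot even be aimed at
        if (!target.boundingVolume->intersects(shot))  return;  // missed
        if (!target.constitution)                      return;  // cannot be shot to death
        target.constitution->health -= damage;
    }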

 

It gets even worse if you have multiple entities with a reference to the same entity. Null pointers can really ruin your day.
This happens regardless of the architecture, because it is a much lower-level problem. Even an unresolvable relation in a game item database has this effect.
 

...

 

Well, the thing is that the problems to be represented come from the game design. The software architecture should be powerful enough to solve them, and that not only for the moment but also in the long term.

 

As with everything, the programmer can make mistakes, regardless of whether an ECS is in use or not. And like everything else, ECS is not a silver bullet. But for a common problem in game development it is a better approach than others. As said, this doesn't mean that every occurring problem should be forced into the ECS; there are parts that do not fit.

 

 

Just my 2 Cents. Perhaps also a bit biased ;)

 

 

EDIT: Oh, I forgot to say: It is true that properly implementing an ECS is its own can of worms. There are many aspects to consider, many routes to take, and taking the wrong route will have negative impacts like lost performance. It is a considerable hurdle. But that doesn't mean that the architecture is bad in itself. Instead it means that there are good and bad approaches.




#5188771 Calculating Right Vector from Two Points

Posted by haegarr on 23 October 2014 - 11:42 AM

Let the spline be divided in segments, so that a sequence of positions

   { p0, p1, p2, ... , pn }

is given. The difference vector 

   di := pi - pi-1, 0 < i <= n

"ties" 2 consecutive points. Its projection onto the ground plane (assuming this is the x-y plane) is

   d'i := ( di;x, di;y, 0 )

and the corresponding sideway vector, i.e. one of its two perpendicular vectors in the plane, normalized, is

   si := ( di;y, -di;x, 0 ) / | d'i |

 

This could be used to calculate the 4 corner points of a quad with width w for the segment i as

   pi-1, pi, pi + si * w, pi-1 + si * w

 

Although each segment gets the same width w, the result looks bad because of the said gaps and overlaps. This is because at any intermediate pi there are 2 sideway vectors si and si+1, and they are normally not identical. 

 

Now, a better crossing point would be located somewhere on the halfway vector

   hi := si + si+1

which is in general no longer perpendicular to either of the two neighboring segments, so that it cannot simply be scaled like

   hi / | hi | * w

in order to yield a constant width of the mesh.

 

Instead, using a scaling like so

   vi := hi * w / ( 1 + si . si+1 )

does the trick (if I've done it correctly). It computes to a vector with the following exemplary lengths, where the subscript denotes the angle between si and si+1:

   | vi |0° = w

   | vi |90° = 1.414 w

   | vi |-90° = 1.414 w

   | vi |45° = 1.08 w

which seems okay to me.

 

Then the quad for segment 1 has the corners

   p0, p1, p1 + v1, p0 + s1 * w

and an intermediate segment's quad has the corners

   pi-1, pi, pi + vi, pi-1 + vi-1

and the quad for segment n has the corners

   pn-1, pn, pn + sn * w, pn-1 + vn-1

 
Well, I hope I made no mistake. Please check twice ;)
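For reference, a rough sketch of the sideway and crossing-point computation (Vec3 and its operators are a hypothetical minimal vector type; the indexing follows the formulas above, with segments 1..n between the points p[0]..p[n]):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };
    Vec3  operator+(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3  operator-(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3  operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    float dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // si: sideway vector of segment i (between p[i-1] and p[i]),
    // projected onto the x-y ground plane and normalized.
    Vec3 sideway(const std::vector<Vec3>& p, std::size_t i) {
        Vec3  d   = p[i] - p[i - 1];
        float len = std::sqrt(d.x * d.x + d.y * d.y);    // | d'i |
        return { d.y / len, -d.x / len, 0.0f };
    }

    // vi: displacement at the interior point p[i] (0 < i < n),
    // vi = (si + si+1) * w / (1 + si . si+1)
    Vec3 crossing(const std::vector<Vec3>& p, std::size_t i, float w) {
        Vec3 si  = sideway(p, i);
        Vec3 si1 = sideway(p, i + 1);
        return (si + si1) * (w / (1.0f + dot(si, si1)));
    }

The quad corners are then formed from p, sideway, and crossing exactly as listed above.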
 
However, I'd consider using the spline as the middle of the river instead of as an edge.



#5188695 Calculating Right Vector from Two Points

Posted by haegarr on 23 October 2014 - 01:20 AM

I'm not sure what exactly your problem is, so I'll describe the entire process in a more or less coarse manner, and we can go into details at whichever step you identify.

 

1.) Sub-divide the spline path into linear segments.

 

2.) Compute the (perpendicular) sideway vectors at the beginning and end of each segment.

 

3.) Compute a common vector for each pair of sideway vectors from the end of one segment and the beginning of the next segment. This step is used to avoid a gap on an outside bend or an overlap on an inside bend, respectively.

 

4.) Compute a sideway displacement of the line segments using a defined (half) width, so that a closed mesh results.




#5187639 "Bind" Entities to Entity

Posted by haegarr on 17 October 2014 - 07:40 AM

"Usually" doesn't mean "ever"... and there are many ways to do things. Say, what is a concrete example of your 10 meshes from the same model?

 

Look at a car with a body and 4 wheels. The 4 wheels each need their own placement because the combination of position and orientation is unique for each one. Their rendering material is different from that of the car body. You can now say that the entire thing is an entity, but then you need to have a sub-level of some kind of entities, and you need to express the relation. The car body has a mesh with, say, 6 sub-meshes, one for the tin parts and 4 for the windows. Each wheel has a mesh with 2 sub-meshes, one for the rim and one for the tyre. Each wheel has a global placement controlled by a parenting with the car body entity as parent (although I'm using a Chassis4Wheel component which not only provides 4 slots at once but also some physical simulation parameters).

 

When you look at the problem, there is no real reason why the wheels cannot be entities side by side with the car body entity, which are nevertheless related to it by parenting. Each entity by itself is complete and can be described by the entity/component concept. You are used to thinking of a car as one entity, but in fact it is just an assembly of many parts, each one existing by itself, and a broken wheel hub (the physical equivalent of the parenting) will separate a wheel from the car.

 

On the other hand, an antenna which will never be detached from the car (or spaceship) and always has the same placement can easily be implemented as a sub-mesh.




#5187622 "Bind" Entities to Entity

Posted by haegarr on 17 October 2014 - 04:21 AM


But what if I have, for example, a space ship consisting of 1000 meshes?
How do I manage such a situation? Do I need to create something like a mesh class with a list of meshes inside?

An entity usually has one mesh, but a mesh often needs to be built from sub-meshes. Actually, vertices are held in sub-meshes, not in meshes. This is done to be able to attach different materials to different parts of the mesh. The placement of the entity in the world is used for the entire mesh.
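As a sketch of what that containment could look like (a hypothetical layout, only to illustrate the relation between mesh, sub-meshes, and materials):

    #include <vector>

    struct Material { /* shader, textures, parameters, ... */ };

    // Vertices live in the sub-meshes, so that each part can carry its own material.
    struct SubMesh {
        std::vector<float>    vertices;   // positions, normals, ... (layout omitted)
        std::vector<unsigned> indices;
        Material*             material = nullptr;
    };

    // The mesh is mainly a container of sub-meshes; the entity's single world
    // placement applies to all of them.
    struct Mesh {
        std::vector<SubMesh> subMeshes;
    };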

 

If you have the need to attach meshes to other meshes at runtime, as with e.g. the crew members and the spaceship, sub-meshes are obviously not the correct solution. Instead, you create a spatial relation between a crew member and the spaceship. This kind of relation is usually called "parenting" and implements forward kinematics (but notice that other possibilities exist as well). Notice that parenting a crew member to the spaceship is done at runtime; the crew member may leave the spaceship at some time and hence the parenting will be destroyed. Other parentings may exist for the entire runtime of the game.

 

To express parenting you can use a tree hierarchy, i.e. you can have a child-entity list inside an entity. IMHO this isn't the best solution. I prefer to express relations between entities explicitly, i.e. to instantiate a Parenting object that is then used to calculate the global placement of the "child" object from its local placement and the global placement of the linked parent object. So the instance of a Parenting object expresses the current existence of a relation, its kind, and the necessary parameters (the local placement and the link to the parent entity, in this case).

 


As I understand it, I must get the entity's children and call the transform on them recursively?

The important aspect is that things fetched from elsewhere are ready-to-use when being requested. One possibility is recursive calling. For example, the global placement of a parented entity should be calculated. One term of the calculation is the global placement of the parent entity. When that placement is requested, the method first checks whether the placement is up-to-date. If not, it first calculates the new placement, perhaps again by requesting another placement. Either way, it returns its own global placement if and only if it is ready-to-use. The other solution to the problem is a so-called dependency graph. It is a structure (a sorted list is sufficient) where dependent objects occur logically behind the objects they depend on. So when updating that structure it is guaranteed that dependencies are up-to-date, simply because they are processed before the dependent objects.
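A minimal sketch of the recursive, on-demand variant (Placement, Matrix4, and the dirty flag are hypothetical stand-ins for whatever up-to-date bookkeeping is actually used):

    struct Matrix4 { /* 4x4 transform; contents omitted */ };
    Matrix4 multiply(const Matrix4& local, const Matrix4& parent);   // defined elsewhere

    class Placement {
    public:
        // Returns the global matrix, recomputing it first if it is out of date.
        const Matrix4& global() {
            if (dirty) {
                globalMatrix = parent ? multiply(localMatrix, parent->global())  // recursion
                                      : localMatrix;
                dirty = false;
            }
            return globalMatrix;   // ready-to-use when returned
        }

        void setLocal(const Matrix4& m) { localMatrix = m; dirty = true; }

    private:
        Placement* parent = nullptr;   // null for top-level entities
        Matrix4    localMatrix, globalMatrix;
        bool       dirty = true;
    };

Note that in such a scheme setLocal would also have to invalidate all dependent child placements (or a version counter comparison would have to replace the plain flag); the dependency graph variant sidesteps this by simply updating everything once per frame in sorted order.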

 


The main thing I cannot understand is that all components are present in a global list separately from the entity itself, so in that case I could do transforms for the same entities several times and it would break everything.

Looking at the description of parenting entities above, what we have is the following:

 

* A collection of meshes, each one with a collection of sub-meshes.

* A collection of global placements, so that exactly one can be associated with each mesh.

* A collection of relation objects, e.g. Parenting instances. Each one means that the targeted global placement is dynamically computed.

 

Now consider that during the game loop one part is responsible for updating the global placements (in fact there is more than one such part, e.g. animation, physics, mechanisms like parenting, collision correction). This part iterates the collection of Parenting objects, which may be given in dependency order. The update just needs to access a global placement and the Parenting code (which itself accesses the local parameters, of course). There is no need to access the mesh or whatever else makes up the entity in its entirety. When the graphical rendering is processed later in the game loop, it just accesses the global placement and the mesh. There is no need to know about Parenting or the like.

 

As you can see from this example, distinct stages during processing need access to only a particular subset of all of the entity components currently existing in the scene. Having such subsets already separated at hand is fine.




#5187614 Planning a Complex Narrative

Posted by haegarr on 17 October 2014 - 03:13 AM


Dialogue trees are closest to what I'm considering, but I'm not sure I'll go with the traditional method used. Instead I've worked out a sort of 'binary' system for ease of response, yes or no, positive, negative or neutral, agree or disagree, etc. Responses for quick interaction while preserving immersion; rather than reading through an extensive amount of text to select something closest to what you want. Which also permits ease of calculating an NPC's admiration or aversion towards the player.

While searching the internet for opinions about in-game dialog structures, I came to a similar conclusion regarding the phrases to be said by the player.

* They should not be (too) verbose, especially but not exclusively if voice acting is in play. This allows the player to pick a phrase while still being interested in reading / hearing the verbose version.

* They should be marked regarding their effect on the interlocutor, so that players who do not speak the game's language natively need not understand the nuances in the sentences.

* The choices should be limited to a few (perhaps at most 5 or so).

* The game settings may provide a switch to turn on/off a kind of conversation help, in that the choices are sorted by how well they fit the player character or the story.

 

However, such things are controversial anyway...




#5186459 Recommend a book for algorithmic 3D modelling theory

Posted by haegarr on 12 October 2014 - 05:10 AM

AFAIK (but maybe I'm not correctly informed) ...

 

Methods like beveling, union, and such are collected under the term "constructive solid geometry", or CSG for short. This can be counted as part of algorithmic modeling in that it provides useful tools.

 

However, algorithmic (or procedural) modeling by itself is usually understood to mean that geometry is generated from a set of rules. This is often applied to architecture or plants: for example, how to automatically place windows and doors in house facades, how to place houses and streets to get a city, how to make the branching of trees, how to place petals, how to generate a terrain, and so on. The classic books handling these aspects are

* Texturing & Modeling - A Procedural Approach

* The Algorithmic Beauty of Plants




#5185091 Precision on StateMachines

Posted by haegarr on 05 October 2014 - 03:29 AM


(as described in http://www.gamedev.net/page/resources/_/technical/game-programming/your-first-step-to-game-development-starts-here-r2976)

I'm not sure where in the cited article a state machine is described. So I assume you are speaking of explicitly modeling a state machine in code. Here are some thoughts:

 

1.) Parallel state machines and hierarchical state machines were invented after noticing that a single state machine is not suitable for handling more complex situations well. In this sense the approach is not really clumsy. However, I'd not categorize your current need as a complex situation.

 

2.) If the number of states involved is low, in this case the number of states denoting movement, then the "combinatorial explosion" (which originally led to hierarchical state machines) can be accepted. I.e. all the states { move_forward, move_backward, ..., attack_forward, attack_backward, ... } may be generated, so that the attacking states implicitly encode the movement state to follow. This is a clean solution from the point of view of a standard state machine, but obviously not the most popular one among developers ;)

 

3.) Activation and de-activation of states can be explicit, i.e. there are State::enter and State::exit methods which are invoked on the next current state and the now obsolete current state, respectively. When the invocation passes the other state as an argument, the next current state can store it for later use. This would implement a dynamic transition mechanism.

 

4.) A more general approach would be to have a state stack where the current state can look up a kind of history of state invocations. This also would allow for a dynamic transition mechanism.

 

5.) The state machine need not necessarily be explicit. In fact a computer works as a state machine anyway. The set of current values of every variable in your game can be understood as the current state, and changing the values can be understood as a transition. For example, a variable action may have the value 0 for standing, 1 for moving, and 2 for attacking. Another variable direction may have the values 0 to 3 for forward, left, right, and backward, respectively. The values of action and direction together then make up the state. As you can see, at any time they form one of 12 possible combinations.
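A tiny sketch of that implicit encoding (the names are purely illustrative):

    enum class Action    { Standing, Moving, Attacking };       // 3 values
    enum class Direction { Forward, Left, Right, Backward };    // 4 values

    struct CharacterState {
        Action    action    = Action::Standing;
        Direction direction = Direction::Forward;
        // Together the two variables encode one of 3 * 4 = 12 states;
        // a "transition" is simply an assignment to one of them.
    };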




#5184736 json c++

Posted by haegarr on 03 October 2014 - 03:16 AM


Though, I seriously have the damnedest time reading documentation. I guess I can understand what any particular function is kind of doing, but I seriously have a difficult time figuring out how it all fits together. Maybe that just comes with time, but right now, for me, it's like trying to understand how to speak a language just by reading a dictionary. Trial and error pretty much ended up winning out today, over actually understanding the documentation

This is a reason why tutorials exist. A (good) tutorial explains how to use what in which situations, while documentation enumerates all possibilities without much regard for use cases. Tutorials are much more suitable for learning, and documentation is good for looking up details or things learned earlier but forgotten for now.




#5184523 json c++

Posted by haegarr on 02 October 2014 - 03:07 AM


That's the nice thing about abstraction. A stream is just a source of data until it says it has no more data. Whether the data is coming from a file, the network or some other arcane source is irrelevant for the user of the stream.

Yep, that's the consequence of the "generic reading" mentioned above :) Although many libraries nevertheless offer various methods of input management; the FreeType library does, for example.

 

Well, just as a clarification for the OP: There are also caveats to be considered: The library must not be allowed to read beyond the logical end of the data. If a stand-alone file is opened and wrapped by a stream, then there is a natural end. Giving the library a stream on a network socket, a generous memory block, or a package file may allow the library to read more bytes than intended for its purpose. This should be considered, e.g. by using an appropriate stream (if one exists) or by implementing a wrapping stream that reports an EOF when the logical end of data is reached. Similarly, if seeking is supported, the outermost stream may have to handle an appropriate offset.
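A minimal sketch of such a wrapping stream, assuming a simple hypothetical read interface (real libraries each define their own stream abstraction):

    #include <algorithm>
    #include <cstddef>

    // Hypothetical generic read interface handed to the parsing library.
    struct IStream {
        virtual ~IStream() = default;
        virtual std::size_t read(void* buffer, std::size_t bytes) = 0;  // returns bytes read, 0 at EOF
    };

    // Restricts an underlying stream to a window of 'length' bytes, so the
    // library sees an EOF at the logical end of the embedded data.
    class BoundedStream : public IStream {
    public:
        BoundedStream(IStream& source, std::size_t length) : source(source), remaining(length) {}

        std::size_t read(void* buffer, std::size_t bytes) override {
            const std::size_t n = source.read(buffer, std::min(bytes, remaining));
            remaining -= n;
            return n;   // 0 once the logical end is reached
        }

    private:
        IStream&    source;
        std::size_t remaining;
    };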




#5184510 json c++

Posted by haegarr on 02 October 2014 - 01:32 AM


... However, it's only reading from a string created in the code. Is there supposed to be a way to load directly from a file in the directory? I was about to try to just load a string from a file using fstream and then parse it, but I thought there's perhaps a cleaner way (hopefully within the jsoncpp code) to do this.

Directly loading files is not necessarily the "cleaner way". It requires dealing with the differences in file path syntax (because many libraries are multi-platform), and it requires the source to be available as a stand-alone file, of course. Think of reading from a network source, or reading file fragments embedded in a custom file format. This is not that unusual: games often use package files to reduce the number of individual files for performance and/or maintenance purposes; or think of the embedding of preview images or color profiles in e.g. PSD files. The library should also work if the stuff is already in memory (perhaps already loaded as a blob from a package file, or received from a network socket). Maybe it should also work with fragments only (similar to XML fragments), although encoding information is no longer available from a fragment.

 

IMHO, the cleaner way is to give a library a generic way of reading content, and to do any file handling (locating, opening, closing) and loading externally to that.




#5183080 Creating a 3D Game Engine [Level Editor]

Posted by haegarr on 26 September 2014 - 02:14 AM


If you can think of anything else that would be essential for a level editor please add it.

A level editor is a tool used to assemble a game level. That is more than just placing geometry. Things like

  * support for game objects (CES)

  * bounding volumes

  * support for path-finding

  * baked lighting

  * attaching shader scripts

  * triggers

  * texturing

  * texture packing

  * sound & music

  * regions & portals

  * rigging

  * animation clips

  * animation trees & blending

  * NPC behavior

  * scripting

  * ...

  * and not to forget writing level files

come to mind.

 

So, what is essential? All that is needed for the type of game and level. Some stuff may be readily imported, and perhaps you have a tool chain that generates other stuff. I don't know; you haven't told us enough details.




#5182450 How to transfer/manage objects from one class to another

Posted by haegarr on 23 September 2014 - 12:05 PM


...

For instance: Should the coordinates x and y and a (rotation) not all be replaced by a single matrix? But then an object has two transformations => one for its coordinates and one for its actual placement in the world/parent...

The homogeneous co-ordinate is an extension, so to say, that allows a translation to be expressed by a matrix multiplication. Without it, a translation would be expressed by an addition. The advantage of having it as a multiplication is that you can compute a single matrix for any combination of translations, rotations, and scalings.

 

What I call a placement is in fact a position and an orientation of an object relative to its super-ordinated object (I'm used to ignoring scaling for a placement). A global placement is for an object in the world, and a local placement is for a child object relative to its parent object. When you apply such a transform matrix you actually multiply geometry (vertex positions, normals, tangents) with that matrix. It is interpreted as "the vertex position / normal / tangent is given in model local space, but I want it in the global / parent space, hence I multiply with the respective transform matrix". Mathematically it plays no role what position you transform; so instead of using a vertex position you can use the point (0,0,1); remember (a) that we use homogeneous co-ordinates, hence the "1", and (b) that we are in model local space. So (0,0,1) is actually the local origin of the model. And when we transform the origin into the global / parent space, we have actually computed the position of the model in the global / parent space. Therefore we have one transform for both its geometry and its placement!

 

An example: The position of the model in the world should be (x,y,1) and its rotation should be the identity:

    M := R(0) * T(x,y) = I * T(x,y) = T(x,y)

The identity matrix (the one where all elements are 0 except the main diagonal elements, which are 1) has no effect. Notice that I'm using row vectors here, so that the common order "geometry is rotated in place, and the rotated geometry is translated" is written from left to right in the formula. Now, using this to transform the position (0,0,1) you get

    (0,0,1) * M = (0,0,1) * T(x,y) = (0*1+0*0+1*x,0*0+0*1+1*y,0*0+0*0+1*1) = (x,y,1)

the said model origin in global / parent space. Try it on paper; it works. :)

 

What exactly gets stored in a Placement object is a question of practice. It is convenient to store the affine transform matrix, of course. It may also be okay to store other parameters, e.g. a position and an angle, so that the matrix is computed from those parameters.

 


So you have an extra "coupling" object that has two parameters => the parent parameter and the child parameter. And you continuously concat the transformation of your child with the coordinates of your parent.
Here again I think I don't really understand it all the way, because it would make more sense if the coordinates of the parent were in a transformation matrix. In that way you can just set the transformation with the coordinate-transformation of the parent and you're done.

All that is said above is only a prelude for parenting. As said, a placement defines a single spatial relation. For parenting, we have a defined spatial relation of the child w.r.t. its parent, say LC, and we want to compute the spatial relation of the child w.r.t. the world (for rendering purposes, for example), say MC. As we know, applying a transform matrix brings us from the local to the parent space. Here we want to go from the model local space to the parent space and further to the world space. So we do this in two steps and get a combined transform matrix

   MC := LC * MP

where MP is the transform matrix of the parent.

 

Doing so means that the Parenting object has the following parameters:

a) A reference to the model's global Placement, perhaps indirectly by a reference to the model itself; within this Placement the matrix MC is stored.

b) A reference to the parent object's global Placement, perhaps indirectly by a reference to the parent itself; within this Placement the matrix MP is stored.

c) The model's local Placement where the matrix LC is stored.

 

When the Parenting is called to update, it requests MP from the parent's Placement, requests LC from its own parameters, computes MC, and stores the result in the model's global Placement. Voila.
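As a sketch, with hypothetical Matrix and Placement types and the same row-vector convention as above:

    struct Matrix { /* homogeneous transform; contents omitted */ };
    Matrix multiply(const Matrix& local, const Matrix& parent);   // row vectors: local first

    struct Placement { Matrix matrix; };

    // Expresses "child is parented to parent"; update() recomputes the child's
    // global placement MC from its local placement LC and the parent's global MP.
    struct Parenting {
        Placement* childGlobal;    // where MC is stored
        Placement* parentGlobal;   // where MP is read from
        Placement  childLocal;     // LC, a parameter of the relation

        void update() {
            childGlobal->matrix = multiply(childLocal.matrix, parentGlobal->matrix);  // MC = LC * MP
        }
    };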

 


If the latter is correct, do you only have to call this extra object once at initialization? Or even further: is this object really necessary? Can't I just set the parent's transformation at the child's construction?

This extra object needs to be called whenever

a) the parent's global Placement matrix has changed, or

b) the model's local Placement matrix has changed,

c) and you need to access the current global Placement matrix of the child.

 

It is part of the update mechanism inside a game loop similar to (but not exactly the same as) those of, say, the animation sub-system.




#5182061 simple crafting system

Posted by haegarr on 22 September 2014 - 04:58 AM

SyncViews has mentioned many of the design flaws already. Here are some more:

 

1.) public bool ItemRecipe.CanCraft

 

a) ... should not be publicly writeable because there is no sense in setting it from outside the class, BUT ...

 

b) ... is not meaningful at all, because CanCraft may become obsolete with every change in the inventory

 

2.) public bool ItemRecipe.CheckForItemsNeeded(Player player)

 

a) ... needs an Inventory object but gets a Player object; that restricts flexibility and burdens the class with unneeded knowledge of what a Player is

 

b) ... doesn't compute what you want (SyncViews has mentioned it).

 

3.) public class HammerRecipe : ItemRecipe

 

a) Seconding SyncViews: Do this in a data driven way, not by inheritance! Even better: Do this in a data driven way, NOT by inheritance!! ;)
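As an illustration of the data-driven direction (all names are made up for this sketch, and it is written in C++ here rather than the C#-style code quoted above): a recipe becomes plain data, and a single canCraft function works against an Inventory rather than a Player, which also addresses point 2a.

    #include <map>
    #include <string>

    // One recipe is plain data: the required item counts and the produced item.
    struct Recipe {
        std::map<std::string, int> ingredients;   // item id -> required count
        std::string                result;
    };

    using Inventory = std::map<std::string, int>; // item id -> owned count

    // No HammerRecipe subclass needed: a "hammer recipe" is just another Recipe instance.
    bool canCraft(const Recipe& recipe, const Inventory& inventory) {
        for (const auto& [item, needed] : recipe.ingredients) {
            const auto it = inventory.find(item);
            if (it == inventory.end() || it->second < needed)
                return false;
        }
        return true;
    }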

 


Be sure not to "overcode" things. You don't always need an elaborate class or several functions to handle something that could, technically, be stored in a handful of variables.

While this is true in principle ... Don't mix the representation with the data model and/or the mechanics. It is not over-engineering if data model, logic, and presentation are separated.





