

haegarr

Member Since 10 Oct 2005

#5194923 opengl normal mapped objects alongside non-normal mapped objects

Posted by haegarr on Yesterday, 03:01 AM

So my post is similarly vague ...

 

How do you use the VAOs/VBOs? From the OP I assume that you generate 1 VAO, bind it, and leave it alone forever. Then all handling of VBOs is done "manually" whenever a draw call is prepared, including enabling/disabling of vertex attributes. Anyway, the entire relevant draw state should be specified with each (sub-)mesh to render, so that your low-level rendering layer is able to compare what is needed with the current set-up (i.e. there is no reliance on some draw state being set or being cleared). So (especially w.r.t. your case) switching VBOs and enabling/disabling the vertex attributes is done on a per-mesh basis. If, on the other hand, you use one VAO per VBO constellation, e.g. one for normal mapped and one for non normal mapped objects, things are not really different except that the draw state is then given with fewer parameters: it specifies the entire vertex assembly state by a single reference instead of a bunch of parameters.
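
A minimal C++ sketch of the per-mesh comparison of the needed draw state against the current set-up; the MeshDrawState struct, the attribute indices, and the cached current-state variable are illustrative assumptions, not taken from the original post:

   #include <GL/glew.h>

   struct MeshDrawState {
       GLuint vbo;          // vertex buffer holding this mesh's data
       bool   hasNormals;   // whether the normal attribute is used
       bool   hasTangents;  // whether the tangent attribute is used
   };

   static MeshDrawState g_current = { 0, false, false };

   void applyDrawState(const MeshDrawState& wanted) {
       // Touch only what actually differs from the current set-up.
       if (wanted.vbo != g_current.vbo) {
           glBindBuffer(GL_ARRAY_BUFFER, wanted.vbo);
           // (re-)specify the glVertexAttribPointer calls for this VBO here
       }
       if (wanted.hasNormals != g_current.hasNormals) {
           if (wanted.hasNormals) glEnableVertexAttribArray(1);
           else                   glDisableVertexAttribArray(1);
       }
       if (wanted.hasTangents != g_current.hasTangents) {
           if (wanted.hasTangents) glEnableVertexAttribArray(2);
           else                    glDisableVertexAttribArray(2);
       }
       g_current = wanted;
   }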

 

How have you detected that "The object is there, it just is rendered invisible/transparent"? I mean, there is a difference between "no pixel is rasterized", "all rasterized pixels are discarded", and "the pixels are alpha blended with an alpha of zero". In all cases the object is not on the screen, but the reasons are very different. For example, what happens if you take the normal mapping fragment shader and short-circuit all computations, simply outputting a constant color with alpha 1? What happens if you short-circuit the computation of the normals in the vertex shader?




#5194244 2 strange opengl problems - small window and reverse

Posted by haegarr on 23 November 2014 - 04:25 AM


Also, everything is reversed in the scene for some reason. The translates work correctly: -1 x goes left, +1 x goes right. However, positive rotation around the Y axis results in the model rotating anti-clockwise; the model you see should be holding the club in the other hand. Would this be a fault in any of the rendering code I've shown?

Translation goes in 3 directions in general. What about the other two? Do they work as expected?

 

However, there is no real wrong or right, because it is all convention. You should pick a set of conventions and ensure that everything works that way. For example, if you have a toolchain preparing your models, make sure that any export of the toolchain follows the conventions (so the runtime can expect everything to be well-formed). Otherwise, make sure that models (placements, animations, ...) that do not follow the conventions are converted right after being imported into the runtime.

 

Typical things that need to be specified are at least:

  * left hand (LHS) or right hand (RHS) co-ordinate system

  * what are the right, up, and forward directions (considering the above spec.); e.g. right = +x, up = +y, forward = -z

  * direction of positive angled rotations; e.g. right handed

  * what is the standard front-facing direction of models; e.g. +z

  * what is the standard look-along direction of the camera; e.g. -z

  * what length does a world unit have (this plays a role for well-sized model import); e.g. 1 world unit = 1 meter

 

For example: Having the (front clipping) camera as well as the model in initial orientation, when you see a model from the back but its right hand is equipped as expected, then probably the front-facing direction of the model is wrong (rotated by 180 degrees around the up direction). If, on the other hand, you see a model from the back but its left hand is equipped although the right hand was expected, then the model is probably imported as LHS (or RHS) but the application uses the other, resulting in a mirroring in one direction.

 

After defining the conventions, start with simple models like a cube with differently colored sides. It is important that they are manually coded and not imported, so you know exactly that they follow your conventions. Then implement the transformations / math and ensure that they work as expected. After you have ensured that your system works like you want, you can start to import models from other sources. Because you can rely on your system to do what you want, any anomalies are now probably based on the model itself.
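
As a minimal sketch of such a hand-coded test model in C++, assuming the example conventions above (RHS, right = +x, up = +y, forward = -z, models facing +z, CCW front faces); the Vertex layout is illustrative:

   struct Vertex { float x, y, z; float r, g, b; };

   // One red quad (two triangles) facing +z, i.e. the model's front under
   // the chosen conventions, wound counter-clockwise as seen from +z.
   static const Vertex kFrontFace[6] = {
       { -0.5f, -0.5f, 0.5f,  1, 0, 0 },
       {  0.5f, -0.5f, 0.5f,  1, 0, 0 },
       {  0.5f,  0.5f, 0.5f,  1, 0, 0 },
       { -0.5f, -0.5f, 0.5f,  1, 0, 0 },
       {  0.5f,  0.5f, 0.5f,  1, 0, 0 },
       { -0.5f,  0.5f, 0.5f,  1, 0, 0 },
   };
   // The other five faces follow the same pattern with other colors;
   // rotating such a cube immediately reveals any convention mismatch.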




#5193934 Line intersection algorithm?

Posted by haegarr on 21 November 2014 - 02:32 AM

One thing that is not okay (at least not in general) is testing structures for equality by using the == operator. E.g.

   var a = { x:0, y:1 };
   var b = { x:0, y:1 };
   alert(a==b);

gives you false. As long as you use simple structures, you can go the following way:

   var a = { x:0, y:1 };
   var b = { x:0, y:1 };
   alert(JSON.stringify(a)==JSON.stringify(b));

although, from a performance point of view, I'd just use

   if (t1.x1==t2.x1 && t1.y1==t2.y1) return false;
   ...

It is still readable. (IMHO copying values just for readability is often counter-productive; for example in check_for_intersection(...), copying the point elements to separate array elements requires me to leave the well-introduced semantics of points, introspect what is done, and rethink the arrangement when analyzing the remaining function body; that makes no sense IMHO.)

 

However, if you suspect check_for_intersection(...) of being wrong, have you tried to write some unit tests for it? E.g. pump a couple of defined points into it and compare the result with what you expect: parallel lines with the same and with a different length, anti-parallel lines, identical lines, orthogonal and non-orthogonal lines, and, of course, variants that cross and variants that do not cross. Presumably, if you have seen cases where the algorithm fails, you may already have an idea of what constellation of lines breaks it.




#5192381 glBlendFunc for multiple render targets

Posted by haegarr on 12 November 2014 - 01:54 AM


But I don't want any blending on my normal texture or world position texture.

Use glEnablei/glDisablei (i.e. the indexed variants) with GL_BLEND to enable/disable blending per draw buffer.
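
A minimal sketch, assuming an OpenGL 3.0+ context and a hypothetical G-buffer layout with color attachment 0 (albedo, blended) and attachments 1 and 2 (normals and world positions, unblended); note that the per-buffer glBlendFunci additionally requires OpenGL 4.0 or ARB_draw_buffers_blend:

   glEnablei(GL_BLEND, 0);   // blend only draw buffer 0 (albedo)
   glBlendFunci(0, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
   glDisablei(GL_BLEND, 1);  // normals are written as-is
   glDisablei(GL_BLEND, 2);  // world positions are written as-is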




#5192245 Scene Graph + Visibility Culling + Rendering

Posted by haegarr on 11 November 2014 - 05:01 AM

I'm not using a std::* solution for this. Instead, rendering is requested by writing a job (graphic state info plus draw call plus sorting key), mainly handled as a blob at this level. The job data is written into a linear-allocation memory area; the key and the job address are written into another linear-allocation memory area. This makes allocation and (later) deallocation super fast, except if the memory area is exhausted (which is an exceptional case, of course). Sorting is done on the key/address pairs only, with the key being the sorting argument. At the moment I do not exploit coherency, so I use radix sort.
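
A minimal sketch of what such key/address pairs and a linear allocator could look like in C++; all names are illustrative assumptions, not taken from the post:

   #include <cstdint>
   #include <cstddef>

   struct JobRef {
       uint64_t key;   // encodes layer, material, depth, ... (sort argument)
       void*    job;   // address of the job blob in the job memory area
   };

   class LinearAllocator {
   public:
       LinearAllocator(void* base, size_t size)
           : base_((char*)base), size_(size), used_(0) {}
       void* allocate(size_t n) {
           if (used_ + n > size_) return nullptr;  // exhausted: exceptional case
           void* p = base_ + used_;
           used_ += n;
           return p;
       }
       void reset() { used_ = 0; }  // "deallocation": drop everything at once
   private:
       char*  base_;
       size_t size_, used_;
   };

   // Per frame: write job blobs into one area and JobRefs into another,
   // radix-sort the JobRefs by key, submit, then reset() both areas.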




#5192241 Deciding graphics interface on init

Posted by haegarr on 11 November 2014 - 04:49 AM

For me, the running executable has built in which technologies to try at all, e.g. OpenGL on Mac, D3D on Windows. The abstract interface class of the graphics sub-system and its rendering device class are implemented in the engine core, but the derived concrete classes are, based on technology and version, implemented in their own dynamic libraries (so I have a DLL for OpenGL 3 and another for OpenGL 4, for example). The init process tries to load them, if possible w.r.t. the version information fetched from the OS, in order of decreasing usefulness one by one until the first success. The result is a concrete instance of the sub-system class which acts as a factory for a concrete instance of the rendering class. The API of the latter is small (method-wise) because it is data driven, i.e. rendering jobs are sent to it, so the number of virtual function calls is insignificant.

 

Although the above approach allows loading all runtime-bindable graphics sub-systems in parallel, in normal mode only one is actually loaded, and only one is instantiated.
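
A minimal sketch of such library-based selection, assuming POSIX dlopen; the library names and the factory entry point are hypothetical:

   #include <dlfcn.h>

   struct GraphicsSubsystem;  // abstract interface, implemented inside each library
   typedef GraphicsSubsystem* (*CreateFn)();

   GraphicsSubsystem* loadBestGraphics() {
       const char* candidates[] = { "libgraphics_gl4.so", "libgraphics_gl3.so" };
       for (const char* name : candidates) {          // decreasing usefulness
           void* lib = dlopen(name, RTLD_NOW);
           if (!lib) continue;                        // try the next one
           CreateFn create = (CreateFn)dlsym(lib, "createGraphicsSubsystem");
           if (create) {
               if (GraphicsSubsystem* gfx = create()) return gfx;
           }
           dlclose(lib);
       }
       return nullptr;  // no usable sub-system: application is not runnable
   }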




#5192229 Computing Normal Vectors

Posted by haegarr on 11 November 2014 - 01:47 AM

The normal is the same for the entire plane that the 3 vertex positions (triangle) or 4 vertex positions (quad) lie in. So you do not compute the normal at the middle of a square; you compute the normal for the said entire plane.

 

With the assumption that the quad given by 4 corners in order p0, p1, p2, p3 is planar and not degenerate, the difference vectors p1-p0 and p2-p0 can be used as u and v. Often the "cross over" difference vectors p3-p1 and p2-p0 are also used if a quad is given.

 

BTW: Computing a vertex normal is usually done not by using the normals of the adjacent faces directly, but by using them weighted. The weights are often chosen w.r.t. the angle the face has at the vertex, so that smaller angles have less effect.
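
A minimal C++ sketch of both computations (face normal from u and v, and the angle-weighted contribution to a vertex normal); the vector helpers are illustrative:

   #include <cmath>

   struct Vec3 { float x, y, z; };
   Vec3 sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
   Vec3 cross(Vec3 a, Vec3 b) {
       return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
   }
   float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
   Vec3 normalize(Vec3 a) {
       float l = std::sqrt(dot(a, a));
       return { a.x/l, a.y/l, a.z/l };
   }

   // Normal of the plane spanned by u = p1-p0 and v = p2-p0.
   Vec3 faceNormal(Vec3 p0, Vec3 p1, Vec3 p2) {
       return normalize(cross(sub(p1, p0), sub(p2, p0)));
   }

   // Contribution of one face to the normal at vertex p0: the face normal
   // weighted by the face's angle at p0, so smaller angles have less effect.
   // Sum these over all adjacent faces, then normalize the sum.
   Vec3 weightedContribution(Vec3 p0, Vec3 p1, Vec3 p2) {
       Vec3 u = normalize(sub(p1, p0));
       Vec3 v = normalize(sub(p2, p0));
       float angle = std::acos(dot(u, v));
       Vec3 n = faceNormal(p0, p1, p2);
       return { n.x*angle, n.y*angle, n.z*angle };
   }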




#5192005 ECS

Posted by haegarr on 10 November 2014 - 02:23 AM

You know that there are many variations of ECS in the wild? I don't mean variations in implementation details but in architectural decisions. You should have at least an idea of what you want/need before you start looking at code; otherwise you may become locked onto the first thing you see. This forum has several threads discussing things like implicit vs. explicit entities, code in components vs. code in sub-systems, direct access vs. messaging, and so on.

 

I doubt that anybody has done the work to formulate an ECS in pseudocode. Too much work IMHO. However, there are a few ECS implementations in source code available on the internet, for example the Artemis framework.




#5189364 OpenGL/GLSL version handling

Posted by haegarr on 27 October 2014 - 03:36 AM

Anyhow, I'm curious to know everyone else's approach to compatibility, as this is rather a new realm for me.

I use a 2-step mechanism: a coarse one based on dynamic library selection, and a fine one based on late binding. This works because the graphics sub-system is implemented in layers.

 

First off, somewhere in the starting process of the application, an OpenGL context is created with the highest supported version. If this fails, then a context with the next lower version is tried, down to the lowest supported version. This can be done in a comfortable way on Mac OS, because it gives one the highest available version when asked for a 3.2 core profile anyway; on Windows, the process is less comfortable. If a context was generated, the corresponding dynamic library is loaded. If this fails, then again the next lower version for context creation is tried. In the end I hope to have a library successfully loaded, or else the application is not runnable, of course.
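
A minimal sketch of that fallback loop in C++; the helper functions are hypothetical placeholders for the platform-specific (WGL/CGL/GLX) parts, and the version list is illustrative:

   bool tryCreateContext(int major, int minor);    // platform-specific
   bool loadGraphicsLibrary(int major, int minor); // loads the matching DLL
   void destroyContext();

   struct Version { int major, minor; };

   bool initGraphics() {
       const Version versions[] = { {4, 5}, {4, 1}, {3, 3}, {3, 2} };
       for (Version v : versions) {              // highest version first
           if (!tryCreateContext(v.major, v.minor))
               continue;                         // context creation failed
           if (loadGraphicsLibrary(v.major, v.minor))
               return true;                      // context + library loaded
           destroyContext();                     // library failed: next version
       }
       return false;  // nothing worked: the application is not runnable
   }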

 

Such a library brings in the lowest layer of the graphics sub-system w.r.t. my own implementation. It is able to process graphics jobs that are generated within the middle layer of the graphics sub-system. When a library has been loaded successfully, it has a defined minimum of features. Additional features or better implementations are then recognized by the set-up routine of the graphics sub-system layer loaded with the library; this is based on investigating the minor version number and the extensions. The layer then does symbol binding on its own.

 

Regarding shader scripts: Some scripts are assembled at runtime, and this assembly takes into account the mechanisms available from the loaded library. Pre-made scripts need to be available for each loadable library, of course, and hence are simply selected.

 

 

EDIT: BTW: The above mechanism is not only used for version compatibility but is abstract enough to also select between OpenGL and D3D.




#5188873 Game State management in Entity Component System architecture

Posted by haegarr on 24 October 2014 - 02:02 AM

ECS gives me migraines.
 
At first it sounds like a great idea, but you soon end up in a situation where a tiny change in the game can have a massive effect on performance.

Well, the performance hit can happen in any other architecture, too. 

 

It really depends on your definition of an "Entity".

 
Consider the full-on approach where you have an entity for everything and game objects are bags of entities. So a character running around the game would have ...
 
SkinnedMeshEntity, WeaponEntity, HealthEntity, BackpackEntity, etc. Each of those would have its own bag of entities. (TransformEntity, CollisionBoxEntity, ......)

ECS stands for Entity Component System, although IMHO "component-based entity system" would be better. A "game object" is often used as a synonym for entity.

 

With this in mind ...

* SkinnedMesh is a component,

* Weapon is probably an entity in its own right (because it should probably be exchangeable),

* Health is a component, or perhaps better a variable of a Constitution component,

* Backpack is perhaps an entity with an Inventory component (if used as an item by itself), or perhaps only an Inventory component.

 

Other things like e.g. terrain are (should be, at least ;)) never an entity, so "the full-on approach" is wrong from the start. However, what's wrong with a weapon, a clip of ammo, or a health pack (not the Health constitution!) being entities of their own? What's wrong with each of them having its own placement, its own mesh, and other components? Nothing, because your game needs this! You want the weapon to lie around on the ground, ready to be picked up, so it is placed in the world. The weapon clearly has its own mesh.

 

So suddenly this one game object contains hundreds of entities. Nasty but workable. Things get really nasty when you start needing references between entities.

Well, I think dozens of components would already be more than sufficient. However, the aspect you are indicating here, if I understand you correctly, is that parts that were usually embedded as simple variables now become their own software objects. The typical example is the placement, since close to everything is placed somewhere in the game world and hence needs a placement. The point is that "close to everything" is not "everything", and with a reasonable amount of static geometry it may even be wrong as an argument. However, even if I accept that a placement is needed by close to everything, it is just a single isolated example against a much greater number of counter-examples. And of course nothing prevents you, in the end, from embedding some "very important" components. E.g. Unity3D does so with the placement, too, although I do not agree with this approach ;)

 

The weapon needs bullets, so it has a reference to a BulletMagazineEntity. A skinned mesh needs an animation entity. A character needs a target to shoot at, which is an entity.... you see where this is going?

Again, use the solution that is suitable.

 

* A firearm weapon needs ammo; it does not need a BulletMagazine per se. After picking up an ammo clip, the ammo counter could simply be increased and the clip entity could be discarded.

 

* There is no technical / architectural reason why a skinned mesh (a component) needs an animation (a component). It is a decision of the designer. S/he could create a static skinned mesh entity. Also, it's the entity that needs both the skinned mesh and the animation (as said, by design).

 

* A character needs a target to shoot at, which is an entity ... well, what else should it be? I go even further: I say that shooting is an action that checks against a BoundingVolume component of an entity, and if the hit test passes, it causes a defined decrement of a variable called "Health", to be found in the Constitution component of the targeted entity. There is no Constitution / Health? Then shooting to death is not possible. There is no BoundingVolume? Then aiming at the entity is not meaningful, perhaps not even possible. (One can see from this example that components can also be used to tag entities for game-related purposes. You can even mark a BoundingVolume to be used especially for shooting, for example.)
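
A minimal C++ sketch of that shooting action; the component types and the lookup functions are illustrative assumptions (how component lookup works depends entirely on the ECS at hand):

   struct Entity;
   struct BoundingVolume { /* extents, ... */ };
   struct Constitution  { int health; };

   // Hypothetical component queries.
   BoundingVolume* getBoundingVolume(Entity& e);
   Constitution*   getConstitution(Entity& e);
   bool hitTest(const BoundingVolume& bv /*, ray, ... */);

   void shootAt(Entity& target, int damage) {
       BoundingVolume* bv = getBoundingVolume(target);
       if (!bv) return;              // no volume: the entity cannot even be aimed at
       if (!hitTest(*bv)) return;    // the shot missed
       if (Constitution* con = getConstitution(target))
           con->health -= damage;    // no Constitution: cannot be shot to death
   }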

 

It gets even worse if you have multiple entities with a reference to the same entity. Null pointers can really ruin your day.

This happens regardless of the architecture, because it is a much lower-level problem. Even an unresolvable relation in a game item database has this effect.
 

...

 

Well, the thing is that the problems to represent come from the game design. The software architecture should be powerful enough to solve them, not only immediately but also in the long term.

 

As with everything, the programmer can make mistakes, regardless of whether ECS is in use or not. As with everything, ECS is not a silver bullet either. But for a common class of problems in game development, it is a better approach than others. As said, this doesn't mean that every occurring problem should be forced into the ECS; there are parts that do not fit.

 

 

Just my 2 Cents. Perhaps also a bit biased ;)

 

 

EDIT: Oh, I forgot to say: It is true that properly implementing an ECS is its own can of worms. There are many aspects to consider, many routes to go, and going the wrong route will have negative impacts like performance loss. It implies a great hurdle. But that doesn't mean that the architecture is bad in itself; instead it means that there are good and bad approaches.




#5188771 Calculating Right Vector from Two Points

Posted by haegarr on 23 October 2014 - 11:42 AM

Let the spline be divided into segments, so that a sequence of positions

   { p_0, p_1, p_2, ..., p_n }

is given. The difference vector

   d_i := p_i - p_(i-1),  0 < i <= n

"ties" 2 consecutive points. Its projection onto the ground plane (assuming this is the x-y plane) is

   d'_i := ( d_i.x, d_i.y, 0 )

and the corresponding sideway vector, i.e. one of its two perpendicular vectors in the plane, normalized, is

   s_i := ( d_i.y, -d_i.x, 0 ) / | d'_i |

 

This could be used to calculate the 4 corner points of a quad with width w for segment i as

   p_(i-1), p_i, p_i + s_i * w, p_(i-1) + s_i * w

 

Although each segment gets the same width w, the result looks bad because of gaps on outside bends and overlaps on inside bends. This is because at any intermediate p_i there are 2 sideway vectors s_i and s_(i+1), and they are normally not identical.

 

Now, a better crossing point would be located somewhere on the halfway vector

   h_i := s_i + s_(i+1)

which is in general no longer perpendicular to either of the two neighboring segments, so that it cannot simply be scaled like

   h_i / | h_i | * w

in order to yield a constant width of the mesh.

 

Instead, using a scaling like

   v_i := h_i * w / ( 1 + s_i . s_(i+1) )

does the trick (if I've done it correctly). It computes to a vector whose length, depending on the angle between s_i and s_(i+1), is for example

   | v_i | at 0°   = w
   | v_i | at 90°  = 1.414 w
   | v_i | at -90° = 1.414 w
   | v_i | at 45°  = 1.08 w

which seems okay to me.

 

Then the quad for the first segment has the corners

   p_0, p_1, p_1 + v_1, p_0 + s_1 * w

an intermediate segment's quad has the corners

   p_(i-1), p_i, p_i + v_i, p_(i-1) + v_(i-1)

and the quad for segment n has the corners

   p_(n-1), p_n, p_n + s_n * w, p_(n-1) + v_(n-1)

 
Well, I hope I made no mistake. Please check twice ;)
 
However, I'd consider using the spline as the middle of the river instead of as an edge.
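
A minimal C++ sketch of the construction above; Vec3 and the helpers are illustrative, the input is assumed to be the subdivided spline positions p_0 .. p_n, and degenerate (purely vertical) segments are not handled:

   #include <cmath>
   #include <vector>

   struct Vec3 { float x, y, z; };
   static Vec3 add(Vec3 a, Vec3 b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
   static Vec3 sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
   static Vec3 mul(Vec3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }
   // Dot product in the ground plane; the s vectors have z = 0 anyway.
   static float dot2(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y; }

   // Sideway vector s_i: perpendicular (in the x-y ground plane) to the
   // projected difference vector, normalized by | d'_i |.
   static Vec3 sideway(Vec3 pPrev, Vec3 p) {
       Vec3 d = sub(p, pPrev);
       float len = std::sqrt(d.x*d.x + d.y*d.y);
       return { d.y / len, -d.x / len, 0.0f };
   }

   // Outer edge points q_i = p_i + v_i (or p_i + s_i * w at the two ends)
   // for a strip of width w along the given points.
   std::vector<Vec3> outerEdge(const std::vector<Vec3>& p, float w) {
       size_t n = p.size() - 1;                  // number of segments
       std::vector<Vec3> q(p.size());
       q[0] = add(p[0], mul(sideway(p[0], p[1]), w));
       for (size_t i = 1; i < n; ++i) {
           Vec3 si  = sideway(p[i-1], p[i]);
           Vec3 si1 = sideway(p[i], p[i+1]);
           Vec3 h   = add(si, si1);              // halfway vector h_i
           Vec3 v   = mul(h, w / (1.0f + dot2(si, si1)));  // v_i
           q[i] = add(p[i], v);
       }
       q[n] = add(p[n], mul(sideway(p[n-1], p[n]), w));
       return q;
   }
   // The quad for segment i then has the corners p[i-1], p[i], q[i], q[i-1].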



#5188695 Calculating Right Vector from Two Points

Posted by haegarr on 23 October 2014 - 01:20 AM

I'm not sure what exactly your problem is, so I describe the entire process in a more or less coarse manner, and we can go into details at whichever step you identify.

 

1.) Sub-divide the spline path into linear segments.

 

2.) Compute the (perpendicular) sideway vectors at the beginning and end of each segment.

 

3.) Compute a common vector for each pair of sideway vectors from the end of one segment and the beginning of the next segment. This step is used to avoid a gap on an outside bend or an overlap on an inside bend, respectively.

 

4.) Compute a sideway displacement of the line segment using a defined (half) width, so that a closed mesh results.




#5187639 "Bind" Entities to Entity

Posted by haegarr on 17 October 2014 - 07:40 AM

"Usually" doesn't mean "ever"... and there are many ways to do things. Say, what is a concrete example of your 10 meshes from the same model?

 

Look at a car with a body and 4 wheels. The 4 wheels each need their own placement because the combination of position and orientation is unique for each one. Their rendering material is different from that of the car body. You can now say that the entire thing is an entity, but then you need to have a sub-level of some kind of entities, and you need to express the relation. The car body has a mesh with, say, 6 sub-meshes, one for the tin parts and 4 for the windows. Each wheel has a mesh with 2 sub-meshes, one for the rim and one for the tyre. Each wheel has a global placement controlled by a parenting with the car body entity as parent (although I'm using a Chassis4Wheel component, which not only provides 4 slots at once but also some physical simulation parameters).

 

When you look at the problem, there is no real reason why the wheels cannot be entities side by side with the car body entity, which are nevertheless related to the car body entity by parenting. Each entity by itself is complete and can be described by the entity/component concept. You are used to thinking of a car as one entity, but in fact it is just an assembly of many parts, each one existing by itself, and a broken wheel hub (the physical equivalent of the parenting) will separate a wheel from the car.

 

On the other hand, an antenna which will never be detached from the car (or spaceship) and always has the same placement can easily be implemented as a sub-mesh.




#5187622 "Bind" Entities to Entity

Posted by haegarr on 17 October 2014 - 04:21 AM


But what if I have, for example, a space ship that consists of 1000 meshes?
How to manage such a situation? Do I need to create something like a mesh class with a list of meshes inside?

An entity usually has one mesh, but a mesh often needs to be built from sub-meshes. Actually, vertices are held in sub-meshes, not in meshes. This is done to be able to attach different materials to different parts of the mesh. The placement of the entity in the world is used for the entire mesh.
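
A minimal structural sketch of that mesh / sub-mesh split in C++; all type names are illustrative assumptions:

   #include <vector>

   struct Material { /* shader, textures, ... */ };

   struct SubMesh {
       std::vector<float> vertices;  // vertices live in the sub-mesh
       Material* material;           // each part can have its own material
   };

   struct Mesh {
       std::vector<SubMesh> subMeshes;
   };

   struct Placement { float position[3]; float orientation[4]; };

   struct Entity {
       Placement placement;  // one placement for the entire mesh
       Mesh* mesh;           // usually one mesh per entity
   };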

 

If you have the need to attach meshes to other meshes at runtime, like e.g. the crew members and the spaceship, sub-meshes are obviously not the correct solution. Instead, you create a spatial relation between a crew member and the spaceship. This kind of relation is usually called "parenting" and implements forward kinematics (but notice that other possibilities exist as well). Notice that parenting a crew member to the spaceship is done at runtime; the crew member may leave the spaceship at some time and hence the parenting will be destroyed. Other parentings may exist for the entire runtime of the game.

 

To express parenting you can use a tree hierarchy, i.e. you can have a child-entity list inside an entity. IMHO this isn't the best solution. I prefer to express relations between entities explicitly, i.e. to instantiate a Parenting object that will further be used to calculate the global placement of the "child" entity from its local placement and the global placement of the linked parent entity. So the instance of a Parenting object expresses the current existence of a relation, its kind, and the necessary parameters (the local placement and the link to the parent entity, in this case).

 


As I understood, I must get the entity's children and call the transform on them recursively?

The important aspect is that things fetched from elsewhere are ready-to-use when being requested. One possibility is recursive calling. For example, the global placement of a parented entity should be calculated. One term of the calculation is the global placement of the parent entity. When that placement is requested, the method first checks whether the placement is up-to-date. If not, it first calculates the new placement, perhaps again by requesting another placement. However, it returns its own global placement if and only if it is ready-to-use. The other solution to the problem is a so-called dependency graph. It is a structure (a sorted list is sufficient) where dependent objects occur logically behind the objects they depend on. So when updating that structure, it is guaranteed that dependencies are up-to-date simply because they are processed before the dependent objects.
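
A minimal C++ sketch of such an explicit Parenting relation with lazy, dirty-flag based evaluation (the recursive variant described above); the placement is reduced to a bare 4x4 matrix, and all names are illustrative, not from the original post:

   struct Mat4 { float m[16]; };

   // Row-major 4x4 concatenation: result = parent * local.
   Mat4 concat(const Mat4& a, const Mat4& b) {
       Mat4 r{};
       for (int i = 0; i < 4; ++i)
           for (int j = 0; j < 4; ++j)
               for (int k = 0; k < 4; ++k)
                   r.m[i*4 + j] += a.m[i*4 + k] * b.m[k*4 + j];
       return r;
   }

   struct Entity;

   struct Parenting {
       Entity* parent;  // link to the parent entity
       Mat4    local;   // placement relative to the parent
   };

   struct Entity {
       Parenting* parenting = nullptr;  // null if not parented
       Mat4 global{};                   // cached global placement
       bool globalDirty = true;

       // Returns the global placement, recomputing it first if necessary;
       // this is the recursive variant. A dependency graph would instead
       // update all placements once per frame in sorted order.
       const Mat4& globalPlacement() {
           if (globalDirty) {
               if (parenting)
                   global = concat(parenting->parent->globalPlacement(),
                                   parenting->local);
               globalDirty = false;
           }
           return global;
       }
   };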

 


The main thing I cannot understand is that all components are present in a global list separately from the entity itself, so in that case I could do transforms for the same entities several times and it would break everything.

Looking at the description of parenting entities above, what we have is the following:

 

* A collection of meshes, each one with a collection of sub-meshes.

* A collection of global placements, so that exactly one can be associated with each mesh.

* A collection of relation objects, e.g. Parenting instances. Each one means that the targeted global placement is dynamically computed.

 

Now consider that while running the game loop, one part is responsible for updating the global placements (in fact there is more than one such part, e.g. animation, physics, mechanisms like parenting, collision correction). This part iterates the collection of Parenting objects, which may be given in dependency order. The update just needs to access a global placement and the Parenting code (which itself accesses the local parameters, of course). There is no need to access the mesh or whatever else makes up the entity in its entirety. When later in the game loop the graphical rendering is processed, it just accesses the global placement and the mesh; there is no need to know about Parenting or such.

 

As you can see from this example, distinct stages during processing need access to only a particular subset of all of the entity components currently existing in the scene. Having such subsets already separated at hand is fine.




#5187614 Planning a Complex Narrative

Posted by haegarr on 17 October 2014 - 03:13 AM


Dialogue trees are closest to what I'm considering, but I'm not sure I'll go with the traditional method used. Instead I've worked out a sort of 'binary' system for ease of response, yes or no, positive, negative or neutral, agree or disagree, etc. Responses for quick interaction while preserving immersion; rather than reading through an extensive amount of text to select something closest to what you want. Which also permits ease of calculating an NPC's admiration or aversion towards the player.

While investigating the internet for opinions about in-game dialog structures, I came to a similar conclusion regarding the phrases to be said by the player.

* They should not be (too) verbose, especially but not exclusively if voice acting is in play. This allows the player to pick a phrase while still being interested in reading / hearing the verbose one.

* They should be marked regarding their effect on the interlocutor, so players not speaking the game's language natively need not understand the nuances in sentences.

* The choices should be limited to a few (perhaps at most 5 or so).

* The game settings may provide a way to switch on/off a kind of conversation help, in which choices are sorted by how well they fit the player character or the story.

 

However, such things are controversial anyway...





