haegarr

Member Since 10 Oct 2005

#5163832 How does resolution work?

Posted by haegarr on 30 June 2014 - 08:12 AM

The keywords are:

* multi-resolution images,

* resolution independent co-ordinate system,

* letter/pillow boxes.

 

Sprites can be available at different resolutions, and the set of sprites that best matches the given platform is chosen at the beginning. Also, don't think that 1:1 pixel mapping between the sprite and the screen is set in stone.

 

Regarding sprite placement, collision detection and such: pixel co-ordinates are bad for these purposes. Instead, use a virtual, resolution independent co-ordinate system, and map it to pixels only late, during rendering.

 

Use window-relative co-ordinates for the placement of GUI elements. E.g. the screen height is fine for normalization, so that the screen-relative co-ordinates range from 0 to the aspect ratio horizontally and from 0 to 1 vertically. Further, allow for anchoring, so that GUI elements can be related e.g. to the left / center / right window border horizontally, and to the top / center / bottom border vertically. I personally do this with 2 values per dimension: one specifies the anchor position relative to the width or height, respectively, and the other specifies an offset in relative co-ordinates (this time relative to the height for both dimensions), so that the anchor can be placed anywhere in the window; see the sketch below.
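
A minimal sketch of that two-values-per-dimension scheme (the type and function names are mine, purely for illustration):

struct Anchor1D {
    float anchor; // 0 = left/top border, 0.5 = center, 1 = right/bottom border
    float offset; // additional offset, in height-relative co-ordinates
};
struct Anchor2D { Anchor1D x, y; };

// Window-relative position: vertical range is [0,1], horizontal range is [0, aspectRatio].
float resolveX(const Anchor2D& a, float aspectRatio) { return a.x.anchor * aspectRatio + a.x.offset; }
float resolveY(const Anchor2D& a)                    { return a.y.anchor + a.y.offset; }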

 

The aspect ratio is the only real problem if, for competitive reasons, the playground must not be more visible to one player than to another. In such a case you should work with pillar/letter boxes. Those need not be ugly black; they can be filled with some nice background.




#5163786 encounter weird problem when turn on the color blending.

Posted by haegarr on 30 June 2014 - 02:45 AM


Now it seems like not that easy to solve this issue. Some of you guys recommend me to resort all objects in a proper way, ...

Yep. Or else you try depth peeling as is suggested above by LS.

 


... but even if only a single organ displayed in the scene, the problem still remains. ...

Maybe this is because of the already mentioned concavity of the meshes? The question is: if you simplify the scene down to a single organ, can you tell whether the problem occurs only when you look through a concavity? If so, then we are on the right track in suspecting the drawing order. A solution then would be to use sub-meshes.

 
But if concavity is not the cause, then we need to investigate further.
 


… And the thing is, in my case, each object is mapping to other stuff, resorting means changing everything. It does take time to solve this.  ...

This isn't a problem unique to your case. It is common in game engines and elsewhere, and hence there is a solution :)

 

It is possible to have more than a single order on objects. Notice that it is recommended to have several organizing structures, one for each task. It is absolutely fine to have a logical organization of the objects, a spatial organization (if collision or proximity is an issue), a render order, and perhaps more. Don't get stuck with the über scene graph approach, or you will be lost sooner or later!

 

For example, you iterate the scene description and collect all objects that need to be rendered. You insert each object into a list (which is emptied before the scene is iterated). After finishing, you sort the list by some criterion, in your case the distance from the current camera. Object rendering is then done in the order given by the list. So rendering has no influence on other aspects of object organization, and is nevertheless done in the required way.
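
A minimal sketch of such a render list (the scene/camera types and helper functions are placeholders, not from any particular engine):

#include <algorithm>
#include <vector>

struct RenderItem {
    const Object* object;   // hypothetical scene object type
    float distance;         // distance from the current camera
};

std::vector<RenderItem> renderList; // emptied and refilled every frame

void renderScene(const Scene& scene, const Camera& camera) {
    renderList.clear();
    for (const Object* obj : scene.visibleObjects())
        renderList.push_back({ obj, distance(camera.position(), obj->position()) });

    // back-to-front: farthest objects are drawn first
    std::sort(renderList.begin(), renderList.end(),
              [](const RenderItem& a, const RenderItem& b) { return a.distance > b.distance; });

    for (const RenderItem& item : renderList)
        drawObject(*item.object); // only the draw order changes; the scene structures stay untouched
}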

 


I think Ohforf sake is right, Transparency is not as simple as enabling color blending, maybe this is the key point here!

Absolutely. 




#5163773 Asset file format

Posted by haegarr on 30 June 2014 - 01:18 AM

Usually there are 2, perhaps 3 situations to be distinguished:

(1) "Raw assets" as are loaded by an editor for the purposes of being viewed and perhaps integrated into the project,

(2) "runtime assets" that are loaded by the game engine,

(3) and perhaps "runtime assets" that are loaded by the game engine but provided separately, e.g. for hot-swapping, software updates, or modding.

 

Raw assets can be stored in the native file format of the DCC tool or in a common interchange file format (e.g. Collada for meshes), obviously so that they can be re-loaded into the DCC tool for changes. These assets are usually given as individual files, although a single file may provide more than one asset.

 

Runtime assets are usually provided in a game-engine-specific file format. The purpose is fast loading, which means a binary format, because it is more compact and also requires less pre-processing in the game engine. The tool chain is responsible for converting raw assets into runtime assets.

 

With respect to the normal game play runtime, one doesn't want to open one file per asset. This is because file opening costs time, and having many more or less small files wastes storage. The solution is to use some kind of package file format. It further allows assets to be sequenced, which in turn allows for shorter load times.

 

Now, with respect to hot-swapping, updates, and/or modding, storing assets in a single big package is bad, because replacing an asset inside a package is often painful. A solution is to use the same file format but also allow single, additional files whose content overrides the corresponding content of the regular packages. The runtime asset loader can handle this, for example, if the overriding asset files are stored in a dedicated directory apart from the main asset files; see the sketch below.
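
A hedged sketch of such a loader (AssetData, PackageSet and loadLooseFile are made-up names, purely for illustration):

#include <filesystem>
#include <optional>
#include <string>

// A loose file in the override directory wins over the same asset in the shipped packages.
std::optional<AssetData> loadAsset(const std::string& assetId,
                                   const std::filesystem::path& overrideDir,
                                   PackageSet& packages)
{
    const auto overridePath = overrideDir / (assetId + ".asset");
    if (std::filesystem::exists(overridePath))
        return loadLooseFile(overridePath); // hot-swapped / patched / modded asset
    return packages.find(assetId);          // regular path: asset from the packages
}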

 

With packages the question of different file suffixes for runtime assets obviously vanishes. Further, it is easier for the runtime asset loader to handle a single file suffix; it has to investigate the file content anyway. So to me, using a single suffix for runtime assets is a good way.

 

Just my 2 Cents.




#5163712 Entity component system, component collection?

Posted by haegarr on 29 June 2014 - 03:26 PM

As ever with ECS: There is no single way...

 

When you say "controlling component" you refer to an actual component, or a subsystem?? For what I get you are using a subsystem to refresh current sprite/animation, something like I said in the 3rd solution. If an entity has a group of animations, the groupOfAnimationSubsystem will change the reference to the current animation depending on player state (running, jumping, etc.); if an entity has a group of sprites, the groupOfSpritesSubsystem will change the reference to the current sprite.
But what if you want to render 2 sprites for the same entity? I can't see a clear way of doing that, I always end up with a bigger component, something like "currentGraphics" that can contain more than 1 sprite and/or animation (or other effects).
By the way, you say "the pool of possible sprites is part of the controller", so that controller will be a component? Something like SpriteListComponent?

It depends on how the ECS is implemented. Components may be real objects and have behavior implemented, or else they may be data containers stored in sub-systems, or they may be descriptions for data in sub-systems. However, it does not really change the concept of what I've written above.

 

For what it's worth, here is a more complete description: In my implementation the attachment of a SpriteComponent to an entity causes a Sprite element to be reserved within the SpriteServices (a Services is my implementation of a sub-system). A Sprite element refers to a SpriteResource where the actual image data is available. A SpriteController is a Component that defines a behavior, namely a kind of dynamics on top of a Sprite element; attaching it causes an installation in the SpriteServices. This controller is defined to target not a SpriteComponent but a Sprite element. Doing it this way means that instances of the same type of element inside a sub-system may come from different sources, perhaps but not necessarily all different components.

 

Yes, the controller is a component, because in an ECS you want to define that, and how, the sprite of an entity will be dynamic. A different kind of dynamic control means a different type of controlling component. However, I usually do not use something like a SpriteListComponent, because it would be too generic a component. As mentioned above, the actual sprites are all available as SpriteResources, and the sprites in the world are available as Sprite elements in the sub-system. What any sprite control requires is (a) a reference to the Sprite element to alter (i.e. which Sprite element belongs to the original SpriteComponent of the entity), and (b) which SpriteResource to set into that Sprite element in which situation. This is definitely more specific than a SpriteListComponent.
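
To make that concrete, here is a rough sketch (the names are mine and simplified; SpriteHandle, PlayerState and SpriteResourceId are placeholders, and the real implementation differs in detail):

#include <unordered_map>

// The controlling component: which Sprite element to drive, and which
// SpriteResource to show in which situation (here: a player state).
struct StateSpriteController {
    SpriteHandle target;                                      // Sprite element inside the sub-system
    std::unordered_map<PlayerState, SpriteResourceId> table;  // state -> sprite resource
};

// Inside the sub-system: apply the mapping whenever the state changes.
void applyController(SpriteServices& services, const StateSpriteController& c, PlayerState state) {
    auto it = c.table.find(state);
    if (it != c.table.end())
        services.sprite(c.target).setResource(it->second);    // swap the referenced image only
}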

 
Coming now to the problem of 2 sprites for the same entity: I do not support this. If one wants 2 sprites coupled in some way, it has to be expressed explicitly as a relation between 2 entities. This requires an explicit ParentingComponent which, you guessed it, causes a Parenting to be installed in the SpatialServices sub-system. A Parenting uses 3 Placement instances: it targets the Placement which originates from the PlacementComponent of the current entity (the own global placement), the Placement which originates from the ParentingComponent (the own local placement), and the Placement linked by the ParentingComponent (the global placement of the parent). BTW, here you have another example of the concept that the same kind of sub-system element may come from different types of components.
 
Of course, you could define the SpriteComponent, and hence the Sprite element, with the ability to hold more than one sprite. However, this means 2 references to SpriteResource instances, 2 placements (because there is always a spatial relation, too), perhaps 2 animation controllers, … IMHO this is better solved with 2 entities as described above.



#5163609 sse-alignment troubles

Posted by haegarr on 29 June 2014 - 06:52 AM


for each traingle (lets call him abc - it has vertices abc) I
need to cross  and normalize to get the normal ,

Presumably (but I'm not an SSE expert, so someone may contradict me): The best performance for such a problem comes with a memory layout where each SSE register holds the same components of 4 vertices. I.e.

#include <xmmintrin.h> // SSE intrinsics

// one __m128 register holds the same component of 4 vertices;
// numVertices = total number of vertices (3 per triangle)
size_t count = ( numVertices + 3 ) / 4;
__m128* verticesX = static_cast<__m128*>(_mm_malloc(count * sizeof(__m128), 16));
__m128* verticesY = static_cast<__m128*>(_mm_malloc(count * sizeof(__m128), 16));
__m128* verticesZ = static_cast<__m128*>(_mm_malloc(count * sizeof(__m128), 16));

Fill the arrays with the data of the vertices a, b, c of the first 4-tuple of triangles, then of the second 4-tuple of triangles, and so on. In memory you then have something like:

verticesX[0] : tri[0].vertex_a.x, tri[1].vertex_a.x, tri[2].vertex_a.x, tri[3].vertex_a.x 
verticesX[1] : tri[0].vertex_b.x, tri[1].vertex_b.x, tri[2].vertex_b.x, tri[3].vertex_b.x
verticesX[2] : tri[0].vertex_c.x, tri[1].vertex_c.x, tri[2].vertex_c.x, tri[3].vertex_c.x
verticesX[3] : tri[4].vertex_a.x, tri[5].vertex_a.x, tri[6].vertex_a.x, tri[7].vertex_a.x 
verticesX[4] : tri[4].vertex_b.x, tri[5].vertex_b.x, tri[6].vertex_b.x, tri[7].vertex_b.x
verticesX[5] : tri[4].vertex_c.x, tri[5].vertex_c.x, tri[6].vertex_c.x, tri[7].vertex_c.x 
...
verticesY: analogously, but with the .y component

verticesZ: analogously, but with the .z component

 
Then computations along the following scheme (here written with SSE intrinsics; i runs over the arrays in steps of 3, so that [i+0], [i+1], [i+2] hold the a, b, c vertices of one 4-tuple of triangles)
__m128 dx01 = _mm_sub_ps(verticesX[i+0], verticesX[i+1]);
__m128 dy01 = _mm_sub_ps(verticesY[i+0], verticesY[i+1]);
__m128 dz01 = _mm_sub_ps(verticesZ[i+0], verticesZ[i+1]);
__m128 dx02 = _mm_sub_ps(verticesX[i+0], verticesX[i+2]);
__m128 dy02 = _mm_sub_ps(verticesY[i+0], verticesY[i+2]);
__m128 dz02 = _mm_sub_ps(verticesZ[i+0], verticesZ[i+2]);

// cross product (a-b) x (a-c) for 4 triangles at once
__m128 nx = _mm_sub_ps(_mm_mul_ps(dy01, dz02), _mm_mul_ps(dz01, dy02));
__m128 ny = _mm_sub_ps(_mm_mul_ps(dz01, dx02), _mm_mul_ps(dx01, dz02));
__m128 nz = _mm_sub_ps(_mm_mul_ps(dx01, dy02), _mm_mul_ps(dy01, dx02));

// length and normalization, again 4 at once
__m128 len = _mm_sqrt_ps(_mm_add_ps(_mm_add_ps(_mm_mul_ps(nx, nx), _mm_mul_ps(ny, ny)), _mm_mul_ps(nz, nz)));

nx = _mm_div_ps(nx, len);
ny = _mm_div_ps(ny, len);
nz = _mm_div_ps(nz, len);

should result in the normals of 4 triangles per run.

 


then i need to multiply it by model_pos matrix

Doing the same trickery with the model matrix requires each of its components to be replicated 4 times, so that each register holds 4 times the same value. It is not clear to me what "model_pos" means, but if it is the transform that relates the model to the world, all you need is the 3x3 sub-matrix that stores the rotational part since the vectors you are about to transform are direction vectors.
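
A hedged sketch of that replication (assuming m[row][col] holds the 3x3 rotational part and nx/ny/nz are the normal registers from above; not tested):

// broadcast each matrix element into all 4 lanes once, outside the loop
__m128 m00 = _mm_set1_ps(m[0][0]), m01 = _mm_set1_ps(m[0][1]), m02 = _mm_set1_ps(m[0][2]);
__m128 m10 = _mm_set1_ps(m[1][0]), m11 = _mm_set1_ps(m[1][1]), m12 = _mm_set1_ps(m[1][2]);
__m128 m20 = _mm_set1_ps(m[2][0]), m21 = _mm_set1_ps(m[2][1]), m22 = _mm_set1_ps(m[2][2]);

// rotate 4 normals at once: w = M3x3 * n (direction vectors, so no translation)
__m128 wx = _mm_add_ps(_mm_add_ps(_mm_mul_ps(m00, nx), _mm_mul_ps(m01, ny)), _mm_mul_ps(m02, nz));
__m128 wy = _mm_add_ps(_mm_add_ps(_mm_mul_ps(m10, nx), _mm_mul_ps(m11, ny)), _mm_mul_ps(m12, nz));
__m128 wz = _mm_add_ps(_mm_add_ps(_mm_mul_ps(m20, nx), _mm_mul_ps(m21, ny)), _mm_mul_ps(m22, nz));

If the model transform contains non-uniform scaling, the normals would need the inverse-transpose of that sub-matrix instead.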




#5163582 sse-alignment troubles

Posted by haegarr on 29 June 2014 - 04:11 AM

You currently have an "array of structures", or AoS for short, i.e. a vertex (the structure) sequenced into an array. For SSE it is often better to have a "structure of arrays", or SoA for short. This means splitting the vertex into parts, and each part gets its own array. These arrays can usually be organized better w.r.t. SSE.
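
For illustration (the 9 floats are only guessed here as position / normal / texture co-ordinate):

#include <vector>

// Array of structures (AoS): one struct per vertex, the fields interleaved in memory.
struct VertexAoS { float x, y, z, nx, ny, nz, u, v, w; };
std::vector<VertexAoS> aos;

// Structure of arrays (SoA): each field gets its own tightly packed array, which
// maps nicely onto SSE registers (4 consecutive floats per __m128).
struct VerticesSoA {
    std::vector<float> x, y, z;
    std::vector<float> nx, ny, nz;
    std::vector<float> u, v, w;
};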

 

What semantics do the 9 floats of your vertex have, and what operation should be done on them?




#5163579 Easing equations

Posted by haegarr on 29 June 2014 - 03:43 AM

In all equations t is a local, non-normalized time, running from 0 to d as its full valid range. It is "local" because it starts from 0 (opposed to the global time T, which tells you that the ease function started at T = t0, so that t := T - t0). Each ease function on that site then normalizes t by the division t / d, so this runs from 0 to 1. With this in mind, looking at the "simple linear tweening" function, you'll see the formula of a straight line with offset b and change c. Without c (or with c == 1) the function would return values from b to b+1, but with c the change over t is "amplified" by c, and the result runs from b to b+c. For the other functions the use of c is the same: it is always used as an amplification of the result's change with t / d.
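
For reference, the "simple linear tweening" function from that site, written out in C-style code:

// t: local time in [0, d]; b: start value; c: total change; d: duration.
// Returns b at t == 0 and b + c at t == d.
float linearTween(float t, float b, float c, float d)
{
    return c * (t / d) + b;
}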




#5163576 encounter weird problem when turn on the color blending.

Posted by haegarr on 29 June 2014 - 02:56 AM

The most common problem with transparency is that the rendering order of faces does not match the requirements of the rendering algorithm. The simplest algorithm needs faces to be rendered in back-to-front order. This implies that meshes must be ordered depending on the view, that meshes must not be concave (or else they need to be divided up if a free view is allowed), and that meshes must not touch or overlap (or else z-fighting will occur).
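
For that simple back-to-front approach, a typical OpenGL setup looks roughly like this (a sketch only; the draw functions are placeholders):

// opaque geometry first, with depth writes enabled
drawOpaqueObjects();

// then transparent geometry, sorted back to front, with blending on and depth
// writes off (the depth test stays on so opaque geometry still occludes)
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawTransparentObjectsBackToFront();
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);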

 

As WiredCat mentioned we need more details, but also above the code level. What algorithm is used? How are the meshes organized?

 

When you have a problem with a complex scene, reducing complexity first helps to narrow down the cause. E.g. Does the problem occur even if only a single organ is rendered, ...




#5163180 game pause - ho should be done?

Posted by haegarr on 27 June 2014 - 02:06 AM

I got it mixed, didnt expected that i could neet to run "draw path  " without "advance path"

A game loop should always be separated into at least the following sections (in order):

    1.) input processing,

    2.) world state update,

    3.) rendering.

 

In such a loop, input processing provides the abstracted desires the player has with respect to game world changes (e.g. the player's avatar should jump). These, together with the simulation time elapsed since the last pass, drive the world state update (the time delta actually drives AI, animation, physics). This yields a new snapshot of the world, and rendering then generates any still necessary resources and projects the world onto the screen.

 

From this you can see that pausing a game needs to influence (a) input processing, because you don't want all the input that happens during the pause to be applied to avatar control, and (b) the world state update. Rendering is a reflection of the current world state, and if it runs more often than once per world state update it will simply show the same snapshot again.

 

As was already mentioned above, stopping the world state update can be done by enforcing 0 as the time delta. If wanted, toying with things like avatars breathing during the pause is still possible thanks to the 2nd timer mentioned by LS. However, input processing needs to be handled explicitly, because further input must not be suppressed but routed to other handlers. Here explicit game state switching may come to the rescue.

 

Notice that the way input is pre-processed is important. Input should be gathered, filtered, and all relevant input should be written in a unified manner into a queue from which the game loop's input handlers read. The unified input should be tagged with a timestamp coming from the 1st timer, even if this may give you input "in the future" from the game loop's point of view. If the game gets paused and re-started, a "discontinuity" is introduced in the sequence of timestamps in the queue. This discontinuity helps to suppress false detection of combos started before the pause and continued after it; see the sketch below.
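
A minimal sketch of the pause handling (Game, processInput and the member names are placeholders, purely for illustration):

// Every unified input event carries a timestamp from the 1st (real-time) timer.
struct InputEvent { double timestamp; /* plus the abstracted desire */ };

void runFrame(Game& game, double realDt)
{
    processInput(game);                       // fills the timestamped input queue; routed to
                                              // menu handlers instead of the avatar while paused
    const double dt = game.paused ? 0.0 : realDt;
    game.world.update(dt);                    // AI, animation, physics see dt == 0 while paused
    game.render();                            // shows the unchanged snapshot again while paused
}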




#5161680 opengl object world coordinates

Posted by haegarr on 20 June 2014 - 05:27 AM

One has several things to consider here.

 

1.) Numerical limitations of number representations in computers will always introduce some inaccuracy once enough (non-identity) computations are made. This is the case with quaternions, too. The advantage of (unit) quaternions is that they use 4 numbers for 3 degrees of freedom, while a rotation matrix uses 9 numbers for the same degrees of freedom. That means that rotation matrices carry more constraints, or in other words, that rotation matrices need to be re-adjusted more often than a quaternion does. However, this only lowers inaccuracies; it does not remove them entirely.

 

So, if one accumulates rotations in a quaternion, it has to be re-normalized from time to time. If not, the length of the quaternion will drift away from 1 more and more, and that disturbs its use as an orientation, because only unit quaternions give you a pure orientation / rotation. If one uses matrices, they have to be re-orthonormalized from time to time, which means their columns / rows have to be normalized and, by using the cross product, made pairwise orthogonal. Doing such things is common when dealing with accumulated transformations.

 

2.) You need to think about spatial spaces, and what transformation should happen in which space. If you define a computation like

    M[n+1] := T[n+1] * R[n+1] * M[n]

you perform the rotation R on an already translated and rotated model, since M[n] contains both the current position and orientation. This means that the rotation R, which always has (0,0,0) on its axis of rotation, will cause the model to rotate around an axis that is distant from the model's own local origin.

 

Instead, if you undo the current position first, so that

    M[n+1] := T[n+1] * T[n] * R[n+1] * T[n]^-1 * M[n] = ( T[n+1] * T[n] * … * T[0] ) * ( R[n+1] * R[n] * … * R[0] )

the model always rotates around its own origin. Here you accumulate the rotations by themselves, and likewise the translations. This can be obtained by storing position and orientation in distinct variables, applying translations to the position only and rotations to the orientation only; see the sketch below.
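
In code, the "distinct variables" approach looks roughly like this (Quaternion / Vector3 / Matrix4 stand in for whatever math library is in use):

Quaternion orientation = Quaternion::identity(); // accumulated rotations only
Vector3    position(0.0f, 0.0f, 0.0f);           // accumulated translations only

void applyStep(const Quaternion& deltaRotation, const Vector3& deltaTranslation)
{
    orientation = deltaRotation * orientation;
    orientation.normalize();                     // the re-normalization mentioned under 1.)
    position += deltaTranslation;
}

Matrix4 modelMatrix()                            // composed only when needed, e.g. for rendering
{
    return Matrix4::translation(position) * Matrix4::rotation(orientation);
}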

 

Notice that the latter way does not keep you away from using the current forward vector for translation.




#5159104 Color grading shader

Posted by haegarr on 08 June 2014 - 11:28 AM

What solution you can think to avoid this problem when using linear filtering?

What Hodgman has already mentioned in "2) You need to be very precise with your texture coordinates.": use the center of the texels! When e.g. the red input is 0 and you address the LUT at 0/16 (or 0/256 for 2D), you hit the left border of the texel, and the sampler will interpolate 50% to 50%. However, with an offset of 0.5/16 (or 0.5/256 for 2D), you hit the center of the texel instead, and the sampler will interpolate 100% to 0%. So a span ranges from 0.5/16 to 15.5/16 (or 0.5/256 to 15.5/256 for 2D) for a color channel input range of 0 to 15. Hence interpolation will be done inside a slice, but never across slice boundaries.
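
As a numeric sketch of that offset for the 16-texel case (the 2D 256-wide layout only changes the constants):

// Map a colour channel value in [0,1] onto a 16-texel LUT axis so that samples hit
// texel centers: 0 maps to 0.5/16 and 1 maps to 15.5/16. Linear filtering then
// interpolates inside a slice but never across its borders.
float lutCoordinate(float channel)
{
    const float lutSize = 16.0f;
    return (channel * (lutSize - 1.0f) + 0.5f) / lutSize;
}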

 

BTW: This is true not only for the 2D arrangement, but also for a real 3D LUT.

 

However that bit about using linear vs nearest, there was reason why I did not use linear

If you use nearest neighbor interpolation, you effectively reduce your amount of colors to 16*16*16 = 4096. I.e. a kind of posterize effect.




#5158729 Homing missile problem

Posted by haegarr on 06 June 2014 - 09:42 AM


After a bit of thought and some calculations, I discovered that the conversion of range from [0,360] to [180,-180] seems to be as simple as this:
   float current = 180 - rocket_sprite.getRotation();
With this, the code should work just as in the ActionScript example, since now it's all in the same range and at the same units.
It does not. The missile goes shakey, and there's no easing.

There are several aspects one needs to consider:

 

1.) Is the direction of 0° the same in AS and in SFML? E.g. does it mean straight to the right in both cases? Let us assume "yes".

 

2.) Is the rotational direction the same in AS and in SFML? E.g. does it mean counter-clockwise (or clockwise) in both cases? Let us assume "yes".

 

Then the half circle from 0° through +90° up to +180° is the same for both! There is no transformation needed. However, the other half of the circle is from +180° through 270° up to 360° in SFML and from -180° through -90° up to 0° in AS.

 

If you think of the periodicity of a circle, i.e. the same angle is reached when going 360° in any direction, the negative angles just go in the opposite rotational direction of the positive ones. So going 10° in the one direction is the same as going 360° minus those 10° in the other direction. That also means that it (should) make no difference whether you invoke setRotation with either -10° or +350°. As you can see, the difference of both is just 360°.

 
So why do we need to consider a transformation at all? Because the entire if-then-else block is written in a way that expects the angles to be in [0,+180] for the "upper" half circle and in [-180,0] for the "lower" one.
 

So the transformation means that half of the values, namely those in the "upper" half circle, are identity mapped (i.e. they are used as they are), and only the other half, namely those of the "lower" half circle, actually need to be changed. That is why several code snippets above, including mine, use a stepwise transform that treats both halves of the circle separately.

 

It also means that the reverse transform, i.e. going back to the value space of SFML just before invoking setRotation, should not be required if SFML allows for negative angles, too. (I'm not sure about what SFML allows, so I suggested in my first post to do the reverse transform.)




#5158413 Homing missile problem

Posted by haegarr on 05 June 2014 - 09:18 AM

I tried
int current = rocket_sprite.getRotation() - 180;
but it does not quite work,

It doesn't work because, although it matches the pure range of numbers, it does not consider the correct value space. For example, +90° in AS means +90° in SFML, but -90° in AS means +270° in SFML. I have assumed that in both cases 0° points in the same direction and that positive angles go counter-clockwise in both cases; if this isn't true, then things need more thinking...

 

I also think that the problem lies there, but I'm not sure how to fix it.

There are 2 ways:

 

1.) You can transform "current" and "rotation" into the same value space as is used by AS (i.e. the if-then-else stuff), and later on transform it back to the value space of SFML. I assume that this solution looks like

float current = rocket_sprite.getRotation();                    // SFML: [0,360)
current  = current  <= 180.f ? current  : current  - 360.f;     // map to [-180,+180]
rotation = rotation <= 180.f ? rotation : rotation - 360.f;

followed by the if-then-else stuff, followed by the reverse transform

rotation = rotation >= 0.f ? rotation : rotation + 360.f;       // back to SFML's [0,360) before setRotation

(I have not tested it.)

 

2.) Adapt the if-then-else stuff to the value range of SFML (which would be the better alternative, but requires a bit more thinking).

 

 

EDIT: There was an error in the reverse transform.




#5158385 Homing missile problem

Posted by haegarr on 05 June 2014 - 07:45 AM

AFAIS: ActionScript's atan2() returns a value in [-pi,+pi], so the value of the variable "rotation" is in [-180,+180]. Your equivalent seems to be sf::Transformable::getRotation(), whose result is stored in "current". The documentation says that getRotation() returns a value in [0,360]. However, your code dealing with the delta rotation doesn't take this difference into account.

 

For example, if this condition

    if( abs( rotation - current ) > 180 )

becomes true, inside its body there are 2 branches, one requiring "current" to be negative and the other requiring "rotation" to be negative, but they are never negative by definition! As a result, the "rotation" is not altered, and you suffer from "stuck in places".




#5156948 3D Vector Art graphical effect

Posted by haegarr on 30 May 2014 - 08:03 AM


Regardless of how it is rendered, are there algorithms to selectively hide edges based on say, the face normal ...

There are algorithms that compare the normals of adjacent triangles to classify the shared edge as a feature edge; silhouette edges and crease edges are examples (google for "feature edge silhouette crease", for example). Removing a cross-edge inside an n-gon can be done similarly: if the dot product of the normals of 2 adjacent triangles is very close to 1, then the two faces can be considered co-planar, and the edge between them can be flagged as not drawn. Using the geometry shader, such things can even be done on the fly on the GPU.
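
A hedged sketch of that co-planarity test (Vector3 and dot() stand in for whatever math types are in use):

// Flag the edge shared by two triangles as a feature edge (and hence drawable)
// only if their unit normals are not (nearly) parallel.
bool isFeatureEdge(const Vector3& n0, const Vector3& n1, float cosThreshold = 0.999f)
{
    return dot(n0, n1) < cosThreshold; // close to 1 => co-planar => hide the edge
}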





