
haegarr


#5163609 sse-alignment troubles

Posted by haegarr on 29 June 2014 - 06:52 AM


for each triangle (let's call it abc - it has vertices a, b, c) I
need to cross and normalize to get the normal,

Presumably (but I'm not an SSE expert, so someone may contradict me): The best performance for such a problem comes with a memory layout where each SSE register holds the same component of 4 different vertices, i.e.:

uint count = ( vertices.length + 3 ) / 4; // 4 floats per __m128 register
__m128 verticesX[count];                  // all x components (SoA layout)
__m128 verticesY[count];                  // all y components
__m128 verticesZ[count];                  // all z components

Fill the arrays with the data of the vertices a, b, c of the first 4-tuple of triangles, then of the second 4-tuple of triangles, and so on. In memory you then have something like:

verticesX[0] : tri[0].vertex_a.x, tri[1].vertex_a.x, tri[2].vertex_a.x, tri[3].vertex_a.x 
verticesX[1] : tri[0].vertex_b.x, tri[1].vertex_b.x, tri[2].vertex_b.x, tri[3].vertex_b.x
verticesX[2] : tri[0].vertex_c.x, tri[1].vertex_c.x, tri[2].vertex_c.x, tri[3].vertex_c.x
verticesX[3] : tri[4].vertex_a.x, tri[5].vertex_a.x, tri[6].vertex_a.x, tri[7].vertex_a.x 
verticesX[4] : tri[4].vertex_b.x, tri[5].vertex_b.x, tri[6].vertex_b.x, tri[7].vertex_b.x
verticesX[5] : tri[4].vertex_c.x, tri[5].vertex_c.x, tri[6].vertex_c.x, tri[7].vertex_c.x 
...
verticesY: analogously, but with the .y component

verticesZ: analogously, but with the .z component

 
Then computations along the following scheme (with i advancing in steps of 3, i.e. one a/b/c group of 4 triangles per iteration)
dx01 = verticesX[i+0] - verticesX[i+1];
dy01 = verticesY[i+0] - verticesY[i+1];
dz01 = verticesZ[i+0] - verticesZ[i+1];
dx02 = verticesX[i+0] - verticesX[i+2];
dy02 = verticesY[i+0] - verticesY[i+2];
dz02 = verticesZ[i+0] - verticesZ[i+2];

nx = dy01 * dz02 - dz01 * dy02;
ny = dz01 * dx02 - dx01 * dz02;
nz = dx01 * dy02 - dy01 * dx02;

len = sqrt(nx * nx + ny * ny + nz * nz);

nx /= len;
ny /= len;
nz /= len;

should result in the normals of 4 triangles per run.
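
For illustration, here is a minimal sketch of that scheme written with actual SSE intrinsics; the array and function names are made up, and the code is untested:

#include <xmmintrin.h> /* SSE */

/* computes unit normals for 4 triangles per iteration;
   verticesX/Y/Z hold the SoA data as described above,
   normalsX/Y/Z receive one component register per group of 4 triangles */
void computeNormals(const __m128* verticesX, const __m128* verticesY, const __m128* verticesZ,
                    __m128* normalsX, __m128* normalsY, __m128* normalsZ, unsigned count)
{
    for (unsigned i = 0, n = 0; i + 2 < count; i += 3, ++n) {
        __m128 dx01 = _mm_sub_ps(verticesX[i+0], verticesX[i+1]);
        __m128 dy01 = _mm_sub_ps(verticesY[i+0], verticesY[i+1]);
        __m128 dz01 = _mm_sub_ps(verticesZ[i+0], verticesZ[i+1]);
        __m128 dx02 = _mm_sub_ps(verticesX[i+0], verticesX[i+2]);
        __m128 dy02 = _mm_sub_ps(verticesY[i+0], verticesY[i+2]);
        __m128 dz02 = _mm_sub_ps(verticesZ[i+0], verticesZ[i+2]);

        __m128 nx = _mm_sub_ps(_mm_mul_ps(dy01, dz02), _mm_mul_ps(dz01, dy02));
        __m128 ny = _mm_sub_ps(_mm_mul_ps(dz01, dx02), _mm_mul_ps(dx01, dz02));
        __m128 nz = _mm_sub_ps(_mm_mul_ps(dx01, dy02), _mm_mul_ps(dy01, dx02));

        __m128 len = _mm_sqrt_ps(_mm_add_ps(_mm_add_ps(_mm_mul_ps(nx, nx), _mm_mul_ps(ny, ny)), _mm_mul_ps(nz, nz)));

        normalsX[n] = _mm_div_ps(nx, len);
        normalsY[n] = _mm_div_ps(ny, len);
        normalsZ[n] = _mm_div_ps(nz, len);
    }
}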

 


then i need to multiply it by model_pos matrix

Doing the same trickery with the model matrix requires each of its components to be replicated 4 times, so that each register holds 4 times the same value. It is not clear to me what "model_pos" means, but if it is the transform that relates the model to the world, all you need is the 3x3 sub-matrix that stores the rotational part since the vectors you are about to transform are direction vectors.
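
As a sketch of that replication and the 3x3 transform, assuming m[r][c] is the rotational part in row-major order and reusing the register layout from above (untested):

/* transforms 4 direction vectors at once: n' = M * n */
void transformNormals(const float m[3][3], __m128* nx, __m128* ny, __m128* nz)
{
    /* replicate each matrix element into all 4 lanes of a register */
    __m128 m00 = _mm_set1_ps(m[0][0]), m01 = _mm_set1_ps(m[0][1]), m02 = _mm_set1_ps(m[0][2]);
    __m128 m10 = _mm_set1_ps(m[1][0]), m11 = _mm_set1_ps(m[1][1]), m12 = _mm_set1_ps(m[1][2]);
    __m128 m20 = _mm_set1_ps(m[2][0]), m21 = _mm_set1_ps(m[2][1]), m22 = _mm_set1_ps(m[2][2]);

    /* one matrix row per result component */
    __m128 tx = _mm_add_ps(_mm_add_ps(_mm_mul_ps(m00, *nx), _mm_mul_ps(m01, *ny)), _mm_mul_ps(m02, *nz));
    __m128 ty = _mm_add_ps(_mm_add_ps(_mm_mul_ps(m10, *nx), _mm_mul_ps(m11, *ny)), _mm_mul_ps(m12, *nz));
    __m128 tz = _mm_add_ps(_mm_add_ps(_mm_mul_ps(m20, *nx), _mm_mul_ps(m21, *ny)), _mm_mul_ps(m22, *nz));
    *nx = tx;  *ny = ty;  *nz = tz;
}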




#5163582 sse-alignment troubles

Posted by haegarr on 29 June 2014 - 04:11 AM

You currently have an "array of structures", or AoS for short, i.e. a vertex (the structure) sequenced into an array. For SSE it is often better to have a "structure of arrays", or SoA for short. This means splitting the vertex into its parts, and each part gets its own array. These arrays can usually be organized better w.r.t. SSE.
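
For illustration, a hypothetical 3-float position attribute in both layouts (the names and the size are made up):

enum { MAX_VERTICES = 1024 }; /* example size */

/* AoS: one struct per vertex, attributes interleaved in memory */
struct VertexAoS { float x, y, z; };
struct VertexAoS verticesAoS[MAX_VERTICES];

/* SoA: one tightly packed array per attribute, which maps nicely onto SSE registers */
struct VerticesSoA {
    float x[MAX_VERTICES];
    float y[MAX_VERTICES];
    float z[MAX_VERTICES];
};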

 

What semantics do the 9 floats of your vertex have, and what operation should be done on them?




#5163579 Easing equations

Posted by haegarr on 29 June 2014 - 03:43 AM

In all equations t is a local, non-normalized time, running from 0 to d as its full valid range. It is "local" because it starts at 0 (as opposed to the global time T, which tells you that the ease function started at T = t0, so that t := T - t0). Each ease function on that site then normalizes t by the division t / d, so this runs from 0 to 1. With this in mind, looking at the "simple linear tweening" function, you'll see the formula of a straight line with offset b and slope c. Without c (or with c == 1) the function would return values from b to b+1, but with c the change over t is "amplified" by c, and the result runs from b to b+c. For the other functions the use of c is the same: it is always used as an amplification of the result's change with t / d.
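
For reference, the "simple linear tweening" function from that site written out in C (t = local time, b = start value, c = total change, d = duration):

float linearTween(float t, float b, float c, float d)
{
    return c * t / d + b; /* runs from b (at t == 0) to b + c (at t == d) */
}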




#5163576 encounter weird problem when turn on the color blending.

Posted by haegarr on 29 June 2014 - 02:56 AM

The most common problem with transparency is that the rendering order of faces does not match the requirements of the rendering algorithm. The simplest algorithm needs faces to be rendered in back-to-front order. This implies that meshes have to be sorted depending on the view, that meshes must not be concave (or else they need to be divided up if a free view is allowed), and that meshes must not touch or overlap (or else z-fighting will occur).

 

As WiredCat mentioned, we need more details, but also at a level above the code. What algorithm is used? How are the meshes organized?

 

When you have a problem with a complex scene, reducing complexity first helps to narrow down the cause. E.g. does the problem occur even if only a single organ is rendered, ...




#5163180 game pause - ho should be done?

Posted by haegarr on 27 June 2014 - 02:06 AM

I got it mixed up, didn't expect that I could need to run "draw path" without "advance path"

A game loop should always be separated into at least the following sections (in order):

    1.) input processing,

    2.) world state update,

    3.) rendering.

 

In such a loop, input processing provides the abstracted desires the player has with respect to game world changes (e.g. the player's avatar should jump). These, together with the simulation time elapsed since the last pass, are used to drive the world state update (the time delta actually drives AI, animation, and physics). This gives a new snapshot of the world, and rendering then generates any still missing resources and projects the snapshot onto the screen.

 

From this you can see that pausing a game needs to influence (a) input processing, because you don't want all input happening during the pause to be fed into avatar control, and (b) the world state update. Rendering is a reflection of the current world state, and if it runs more often than once per world state update, it will just show the same snapshot again.

 

As was already mentioned above, stopping the world state update can be done by enforcing 0 as the time delta. If wanted, toying with things like the avatar still breathing during the pause is possible thanks to the 2nd timer mentioned by LS. However, input processing needs to be handled explicitly, because further input must not simply be suppressed but routed to other handlers. Here explicit game state switching may come to the rescue.
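
A minimal sketch of a loop with that kind of pausing; the helper names are made up, and only the structure matters:

while (running) {
    processInput(&inputQueue, paused);           /* 1.) route input to game or pause handlers   */
    double dt = paused ? 0.0 : elapsedSeconds(); /* 2.) a paused world advances by 0 seconds    */
    updateWorld(&world, &inputQueue, dt);
    renderWorld(&world);                         /* 3.) re-shows the same snapshot while paused */
}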

 

Notice that the way input is pre-processed is important. Input should be gathered, filtered, and all relevant input should be written in a unified manner into a queue from which the game loop's input handlers will read. The unified input should be tagged with a timestamp coming from the 1st timer, even if this may give you input "in the future" from the game loop's point of view. If the game gets paused and resumed, then a "discontinuity" will be introduced in the sequence of timestamps in the queue. This discontinuity helps in suppressing false detection of combos started before the pause and continued after the pause.




#5161680 opengl object world coordinates

Posted by haegarr on 20 June 2014 - 05:27 AM

One has several things to consider here.

 

1.) Numerical limitations of the representation of numbers in computers will always introduce some inaccuracy once enough (non-identity) computations are made. This is the case with quaternions, too. The advantage of (unit) quaternions is that they use 4 numbers for 3 degrees of freedom, while a rotation matrix uses 9 numbers for the same degrees of freedom. That means that rotation matrices have to satisfy more constraints, or in other words, that rotation matrices need a re-configuration more often than a quaternion does. However, this only lowers the inaccuracies, it does not remove them entirely.

 

So, if one accumulates rotations in a quaternion, it has to be re-normalized from time to time. If not, then the length of the quaternion will drift further and further from 1, and that will disturb its use as an orientation, because only unit quaternions give you a pure orientation / rotation. If one uses matrices, then they have to be re-orthonormalized from time to time, which means their columns / rows have to be normalized and, by using the cross product, made pairwise orthogonal. Doing such things is common when dealing with accumulation of transformations.
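
As a sketch, the re-normalization step for a quaternion stored as 4 floats (made-up helper, untested):

#include <math.h>

/* re-normalize a quaternion so that it stays a unit quaternion */
void normalizeQuat(float q[4])
{
    float len = sqrtf(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
    q[0] /= len; q[1] /= len; q[2] /= len; q[3] /= len;
}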

 

2.) You need to think about spatial spaces, and about which transformation should happen in which space. If you define a computation like

    M_{n+1} := T_{n+1} * R_{n+1} * M_n

you apply the rotation R to an already translated and rotated model, since M_n contains both the current position and orientation. This means that the rotation R, which always has (0,0,0) on its axis of rotation, will cause the model to rotate around an axis that is distant from the model's own local origin.

 

Instead, if you undo the accumulated translation first, so that

    M_{n+1} := T_{n+1} * ( T_n * … * T_0 ) * R_{n+1} * ( T_n * … * T_0 )^{-1} * M_n = ( T_{n+1} * T_n * … * T_0 ) * ( R_{n+1} * R_n * … * R_0 )

the model always rotates around its own origin. Here you accumulate the rotations by themselves, and likewise the translations. This can be achieved by storing position and orientation in distinct variables, and applying translations only to the position and rotations only to the orientation.

 

Notice that the latter way does not prevent you from using the current forward vector for translation.
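
A minimal sketch of that separation into distinct variables; quatPreMultiply and normalizeQuat are assumed helpers, and the code is untested:

struct Pose {
    float position[3];    /* accumulated translations only            */
    float orientation[4]; /* accumulated rotations, a unit quaternion */
};

void advance(struct Pose* pose, const float deltaRotation[4], const float deltaTranslation[3])
{
    /* rotation touches only the orientation (assumed helper: orientation := deltaRotation * orientation) */
    quatPreMultiply(pose->orientation, deltaRotation);
    /* ... and translation touches only the position */
    pose->position[0] += deltaTranslation[0];
    pose->position[1] += deltaTranslation[1];
    pose->position[2] += deltaTranslation[2];
    /* re-normalize from time to time to counter the numerical drift mentioned in 1.) */
    normalizeQuat(pose->orientation);
}

The model matrix, when needed for rendering, is then rebuilt as translation(position) * rotationMatrix(orientation).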




#5159104 Color grading shader

Posted by haegarr on 08 June 2014 - 11:28 AM

What solution can you think of to avoid this problem when using linear filtering?

It is what Hodgman has already mentioned in "2) You need to be very precise with your texture coordinates.": Use the centers of the texels! When e.g. the red input is 0 and you address the LUT at 0/16 (or 0/256 for 2D), you hit the left border of the texel, and the sampler will interpolate 50% / 50% between two adjacent texels. However, with an offset of 0.5/16 (or 0.5/256 for 2D), you hit the center of the texel instead, and the sampler will interpolate 100% / 0%. So a span ranges from 0.5/16 to 15.5/16 (or 0.5/256 to 15.5/256 for 2D) for a color channel input range of 0 to 15. Hence interpolation will be done inside a slice, but will not cross slice boundaries.
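
As a sketch, the texel-center addressing for a 16-entry dimension of the LUT (hypothetical helper, untested):

/* maps a color channel value in [0,15] to a u coordinate that hits the texel center */
float lutCoord(int channelValue)
{
    return (channelValue + 0.5f) / 16.0f; /* 0 -> 0.5/16, 15 -> 15.5/16 */
}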

 

BTW: This is true not only for the 2D arrangement, but also for a real 3D LUT.

 

However, that bit about using linear vs. nearest: there was a reason why I did not use linear

If you use nearest-neighbor interpolation, you effectively reduce your number of colors to 16*16*16 = 4096, i.e. a kind of posterize effect.




#5158729 Homing missile problem

Posted by haegarr on 06 June 2014 - 09:42 AM


After a bit of thought and some calculations, I discovered that the conversion of range from [0,360] to [180,-180] seems to be as simple as this:
   float current = 180 - rocket_sprite.getRotation();
With this, the code should work just as in the ActionScript example, since now it's all in the same range and in the same units.
It does not. The missile goes shaky, and there's no easing.

There are several aspects one needs to consider:

 

1.) Is the direction of 0° the same in AS and in SFML? E.g. does it mean straight to the right in both cases? Let us assume "yes".

 

2.) Is the rotational direction the same in AS and in SFML? E.g. does it mean counter-clockwise (or clockwise) in both cases? Let us assume "yes".

 

Then the half circle from 0° through +90° up to +180° is the same for both! There is no transformation needed. However, the other half of the circle is from +180° through 270° up to 360° in SFML and from -180° through -90° up to 0° in AS.

 

If you think of the periodicity of a circle, i.e. the same angle is reached when going 360° in any direction, the negative angles just go in the opposite rotational direction of the positive ones. So going 10° in the one direction is the same as going 360° minus those 10° in the other direction. That also means that it (should) make no difference whether you invoke setRotation with either -10° or +350°. As you can see, the difference of both is just 360°.

 
So why do we need to consider a transformation at all? Because the entire if-then-else block is written in a way that expects the angles to be in [0,+180] for the "upper" half circle and in [-180,0] for the lower one.
 

So the transformation means that half of the values, namely those in the "upper" half circle, are identity mapped (i.e. they are used as they are), and only the other half of the values, namely those of the "lower" half circle, actually need to be changed. That is the reason why several code snippets above, including mine, use a stepwise transform that considers both halves of the circle separately.

 

It also means that the reverse transform, i.e. going back to the value space of SFML just before invoking setRotation, should not be required if SFML allows negative angles, too. (I'm not sure what SFML allows, so I suggested in my first post to do the reverse transform.)




#5158413 Homing missile problem

Posted by haegarr on 05 June 2014 - 09:18 AM

I tried
int current = rocket_sprite.getRotation() - 180;
but it does not quite work,

It doesn't work because, although it matches the pure range of numbers, it does not consider the correct value space. For example, +90° in AS means +90° in SFML, but -90° in AS means +270° in SFML. I have assumed that in both cases 0° points in the same direction and in both cases positive angles go counter-clockwise; if this isn't true, then things need more thinking...

 

I also think that the problem lies there, but I'm not sure how to fix it.

There are 2 ways:

 

1.) You can transform "current" and "rotation" into the same value space as is used by AS (i.e. the if-then-else stuff), and later on transform it back to the value space of SFML. I assume that this solution looks like

int current = rocket_sprite.getRotation();                // SFML: result in [0,360]
current  = current  <= 180 ? current  : current  - 360;   // map to AS's [-180,+180]
rotation = rotation <= 180 ? rotation : rotation - 360;

followed by the if-then-else stuff, followed by the reverse transform

rotation = rotation >= 0 ? rotation : rotation + 360;     // back to SFML's [0,360]

(I have not tested it.)

 

2.) Adapt the if-then-else stuff to the value range of SFML (which would be the better alternative, but requires a bit more thinking).

 

 

EDIT: There was an error in the reverse transform.




#5158385 Homing missile problem

Posted by haegarr on 05 June 2014 - 07:45 AM

AFAIS: ActionScript's atan2() returns a value in [-pi,+pi], so that the value of the variable "rotation" is in [-180,+180]. Your equivalent seems to be sf::Transformable::getRotation(), whose result is stored in "current". The documentation says that getRotation() returns a value in [0,360]. However, your implementation dealing with the delta rotation doesn't take this difference into account.

 

For example, if this condition

    if( abs( rotation - current ) > 180 )

becomes true, inside its body there are 2 branches, one requiring "current" to be negative and the other requiring "rotation" to be negative, but neither is ever negative by definition! As a result, "rotation" is not altered, and you suffer from being "stuck in places".




#5156948 3D Vector Art graphical effect

Posted by haegarr on 30 May 2014 - 08:03 AM


Regardless of how it is rendered, are there algorithms to selectively hide edges based on say, the face normal ...

There are algorithms that compare the normals of adjacent triangles to classify the shared edge as a feature edge; silhouette edges and crease edges are examples (google for "feature edge silhouette crease", for example). Removing a cross-edge inside an n-gon can be done similarly: if the dot product of the normals of 2 adjacent triangles is very close to 1, then the two faces can be considered co-planar and the edge between them can be flagged to not be drawn. Using the geometry shader, such things can be done on the fly on the GPU.
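
As a sketch, the co-planarity test with unit face normals; the threshold is a made-up example value, and the code is untested:

/* returns nonzero if the edge shared by two faces with unit normals n1 and n2
   should be hidden because the faces are (nearly) co-planar */
int isEdgeHidden(const float n1[3], const float n2[3])
{
    float d = n1[0]*n2[0] + n1[1]*n2[1] + n1[2]*n2[2];
    return d > 0.999f; /* example threshold close to 1 */
}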




#5156915 Resource managers: how much is overkill?

Posted by haegarr on 30 May 2014 - 02:31 AM

I'm not sure whether I understand why exactly the OP is speaking of overkill. It seems to me that using a package name within the resource name is causing the problem. If so, the following description may help; if not … feel free to ignore my post ;)

 

When the system starts up, it scans a specific directory (the "overlay" directory) for files, and assumes each file to contain a single resource. It stores the names of the files as names of resources in an internal table of contents. I'm using hashes for resource names at the runtime level, but file-path names relative to the directory would work as well. The system further scans another directory (the "package" directory) for files, and each file determined to have the correct format is opened and its stored table of contents is merged with the one already in RAM. During this, if a resource name collides with one already in the internal TOC, it is ignored and the internal entry is left as is; additionally, if the already contained entry refers to a package as its source, a conflict is logged.

 

The table of contents now has an entry for each single resource. The resource names managed therein are not tagged with a package name, nor with whether they originate from the overlay directory. But this additional information is stored alongside the resource name in the entries, of course.

 

A file offset and length are stored in the TOC for accessing a resource inside a package file. Now, the file offset and length do not necessarily address a single resource but may address a sequence of resources. This is called a load unit, because all addressed bytes are loaded (and later unloaded) at once. Nevertheless, the entries in the TOC are still names of individual resources, so requesting any resource of a load unit causes all resources in that load unit to be loaded. A "bill of materials", as may be stored alongside a resource to declare dependencies, always lists each individual resource.
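
A minimal sketch of what such a TOC entry might look like; all names and types are made up:

#include <stdint.h>

/* one entry per resource in the runtime table of contents */
struct TocEntry {
    uint64_t nameHash;     /* hashed resource name                                */
    int      fromOverlay;  /* nonzero if the resource comes from the overlay dir  */
    int      packageIndex; /* which package file it lives in, if not from overlay */
    uint64_t offset;       /* byte offset of its load unit inside the package     */
    uint64_t length;       /* byte length of that load unit                       */
};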

 

Of course, before the loader is requested to load a load unit, the cache is asked whether the resource is already loaded.

 

With a concept like the one above, resources are requested by name, regardless of their storage location and regardless of whether they are bundled with others in a load unit or stored in their own load unit. It allows for hot-swapping during development and for installing updates without the need to ship an entire package file. All dependencies are still explicit on a resource-by-resource basis. If for some reason the toolchain decides to store a particular resource in another load unit, the bills of materials need no update.




#5156257 OpenGL Texture Coordinates

Posted by haegarr on 27 May 2014 - 05:03 AM


I'm using this tutorial pretty heavily, and its section on textures is confusing me to some degree.

I don't know what exactly is confusing you, so let me try to shed some light on the entire thing. Some of the following is probably already known to you, but I need to mention it for completeness.

 

1.) You need a vertex data stream with the vertices' positions. Engines often batch sprites to reduce draw calls. This requires all vertex positions, although coming from different sprites, to be specified w.r.t. the same space, e.g. world space or view space. Hence any motion applied to the sprites has to be applied on the CPU side already, before a VBO is filled.

 

Okay, you can use instancing and hence handle things another way, but that is an advanced topic.

 

2.) You need a vertex data stream with the vertices' uv co-ordinates. If you have only one texture to deal with, you need just one uv stream. If you have several textures, you may need more than a single uv stream. But it is also possible to use one and the same uv stream for several textures (e.g. when using a color map and a normal map with the same layout in texture space).

 

3.) For a sprite you usually don't use normals, because sprites are just flat (leaving some exotic variants aside). Otherwise, if normals are available, you need a vertex data stream for them, too.

 

4.) Whether you use one VBO per data stream, or put all of them into a single VBO, is usually a question of how dynamic the data in each stream is. For example, sprites are often computed frame by frame and transferred to the GPU in a batch. When the CPU computes both the vertex positions and the uv co-ordinates on the fly, then both streams are dynamic and can easily be packed into a single VBO. On the other hand, if the CPU computes just the vertex positions but re-uses the uv co-ordinates again and again, then the vertex position stream is dynamic but the uv co-ordinate stream is static; performance-wise this would mean 2 different VBOs.

 

5.) However, it is crucial to success that there is a one-to-one relation in the sequence of vertex positions and uv co-ordinates. On the GPU the position at index #n and the uv co-ordinate at index #n (as well as any other vertex data) together define vertex #n. That means if you load a model and drop its uv co-ordinates for the sake of computing them afterwards, you have to ensure that the order in which you push uv co-ordinates into the buffer is absolutely the same as before.
 
That said, if you have simple geometry like sprites, it is IMHO better to generate both geometry and uv co-ordinates on the fly. On the other hand, if you have complex model geometry with uv co-ordinates delivered alongside, then don't drop the latter but apply calculations on top of them if needed.
 
6.) With glVertexAttribPointer you make each vertex data stream known to, and hence usable by, the vertex shader.
 
7.) Samplers are the things that allow a shader to access a texture. To get that right you need to fill texture memory with texel data (e.g. using glTexImage2D as you do) while a specific texture unit is active (the default is #0), and to tell the shader which texture unit to access with which sampler (the glUniform1i routine with the location of the sampler uniform as target).
 
8.) There are 2 ways to configure samplers. Historically the settings are stored within the texture itself, using the glTexParameteri routine as you do. This is not entirely right, because the parametrization belongs to the access of the texture, not to the texture itself. Hence there is a newer way where the parametrization is done on sampler objects themselves. However, you do it the old way and it works fine so far, so let it be.
 
9.) Inside the fragment shader you have the (interpolated) uv co-ordinates of the current fragment, and a sampler with a texture bound to it. The uv co-ordinates are prepared so that they are ready to use. The shader calls the sampler with the supplied uv co-ordinate tuple, and gets back an RGBA value.
 
 
As you can see from the description above, there is always a unit on the GPU, be it a vertex input register, a texture sampler unit, or whatever (yes, there are more), and you have to tell both sides, OpenGL and the shader, which units you want to use.
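
A minimal sketch of points 6.) and 7.) above; the variable names and the attribute locations 0 and 1 are made up, and the code is untested:

/* 6.) make the position and uv streams known to the vertex shader */
glBindBuffer(GL_ARRAY_BUFFER, vboPositions);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, vboTexCoords);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1);

/* 7.) bind the texture to texture unit #0 and point the shader's sampler at that unit */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
glUniform1i(glGetUniformLocation(program, "uColorMap"), 0);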



#5156235 OpenGL Texture Coordinates

Posted by haegarr on 27 May 2014 - 02:52 AM


Well, it's the sprite sheet specifically that I'm struggling to find information on. As I mentioned, the information I've found has been for previous opengl versions. While many current version tutorials specify how to load, bind etc they, in my search, have passed over dealing with texture coordinates in opengl 4. As far as "changing the texture coordinates" I simply mean changing them from the default which displays the entire image to instead crop a specific set of coordinates.
There is no default in the sense of OpenGL. Each vertex needs an explicitly given pair of u,v co-ordinates. Assuming that the sprite is rendered as a quad, you have 4 vertices, and each of them has its own u,v pair. To map the entire texture once, supposing it is used as GL_TEXTURE_2D, you explicitly use the entire span of [0,1] for u and [0,1] for v, yielding the tuples (0,0), (0,1), (1,0), and (1,1) at the corners of the quad. Any subset of the texture has co-ordinates inside these spans. E.g. u being in [0,0.5] and v being in [0,1] denotes half of the texture as a vertical strip, addressed in the vertices as (0,0), (0,1), (0.5,1), and (0.5,0).
 
When computing such relative co-ordinates, one needs to consider a few things. OpenGL uses u co-ordinates running from left to right, and v co-ordinates from bottom to top. The co-ordinate 0 means the left / lower border of the leftmost / bottom texel, and the co-ordinate 1 means the right / upper border of the rightmost / top texel. With this in mind, if you want to address the texel with indices s in [0,w-1], t in [0,h-1], where w and h are the dimensions of the texture measured in texels, you have to use
    ul := s / w      for the left border of the texel
    ur := ul + 1 / w      for the right border of the texel
    um := ( ul + ur ) / 2      for the center of the texel
and analogously for v when using h instead of w.
 
So, when the sprite's texels are in a rect with the lower left corner (s1,t1) and the upper right corner (s2,t2), you compute
    ul(s1), vb(t1), ur(s2), vt(t2)
and use them as u,v co-ordinate tuples for the vertices:
    ul(s1) and vb(t1)
    ul(s1) and vt(t2)
    ur(s2) and vt(t2)
    ur(s2) and vb(t1)
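
As a sketch, those formulas in C for a single sprite-sheet cell; the names are made up and the code is untested:

/* computes the 4 uv pairs for a sprite whose texels span the rect with lower left
   corner (s1,t1) and upper right corner (s2,t2) inside a w x h texel sheet       */
void spriteUVs(int s1, int t1, int s2, int t2, int w, int h, float uv[4][2])
{
    float ul = (float)s1 / w, ur = (float)(s2 + 1) / w; /* left / right texel borders */
    float vb = (float)t1 / h, vt = (float)(t2 + 1) / h; /* bottom / top texel borders */
    uv[0][0] = ul; uv[0][1] = vb;
    uv[1][0] = ul; uv[1][1] = vt;
    uv[2][0] = ur; uv[2][1] = vt;
    uv[3][0] = ur; uv[3][1] = vb;
}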
 

But, simply telling me to search "opengl 4 tutorial" seems to be less than helpful to me. I was under the impression this was a beginners forum. They could probably pass along that advice to a sticky and do away with the forum entirely I suppose. Or, let us ask our inane questions, because there is a lot of information out there, and sometimes asking in a forum to parse that down to something useful can be helpful to someone who is in the process of learning.
Excuse my rash answer above; but: computing texture co-ordinates is independent of the OpenGL version. Passing vertex data (with texture co-ordinates being part of it) is dependent on the OpenGL version. The former topic, now that it's clear what "change" means in the OP, is covered above. The latter topic can be found in the tutorials you already have. Most of the OP deals with VBOs and vertex data passing, hiding the actual question. Just as my excuse. :)



#5156225 OpenGL Texture Coordinates

Posted by haegarr on 27 May 2014 - 02:08 AM


I've scoured the forums(and quite a bit of the internet-realm) but can only seem to find out of date ways of handling this. I apologize if this is a repeat question, but all of the threads I've found on this are quite old, or at least old enough they don't seem to apply.

Start the internet-wide search engine of your choice and try "opengl 4 tutorial"; you'll get at least 3 hits on the first page that are already suitable to answer how to deal with textures in OpenGL nowadays.

 

Regarding sprite sheets (or texture atlases, to be precise): this is something that is independent of the OpenGL version. Any good tutorial should tell how texture co-ordinates are to be interpreted, and what the correlation between vertex positions and texture co-ordinates is.

 


… the method of changing the texture coordinates ...

I don't understand what "change" means here exactly. Compute the correct vertex positions and texture co-ordinates on the CPU side, store them into a VBO, pass it to a shader, and use them to draw.



