

haegarr

Member Since 10 Oct 2005

#5214212 Resource Management

Posted by haegarr on 03 March 2015 - 08:39 AM

As said by previous posters: Start with the mesh class you have, and implement a simple renderer using it. This is the only way to constantly check that things are working. Then begin to refactor your code.




#5214184 Resource Management

Posted by haegarr on 03 March 2015 - 06:26 AM


I have many different kinds of resources: images, opengl textures, vbo's, ebo's, vao's, rbo's, shaders, shader programs. (did I miss something?)

Please notice that different kinds of resources require different handling as well. The kind usually meant is assets like meshes, animation data, sound clips, texture images, scripts, and so on. This stuff is designed, resides on mass storage, and needs to be loaded into memory. On the other hand, texture objects, VBOs, IBOs, RBOs, and such are resources in the sense of production means. They are not loaded from mass storage but requested from the graphics API (OpenGL in your case). Furthermore, when thinking of VBOs, IBOs, and similar buffer objects, the memory behind them may also need to be treated as a resource to be managed, for example if you want to batch meshes. Similarly, texture areas may need to be managed, for example regions of an atlas or layers of a texture array object. And at the end of the spectrum there are resources like texture units that are available only in limited amounts, depending on the platform, and hence may need to be managed, too.
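
To make the distinction concrete, here is a minimal C++ sketch (the class and function names are made up for illustration only): an asset cache that loads image data from mass storage, next to a pool that requests texture objects from OpenGL.

#include <GL/gl.h>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct ImageAsset {                    // an asset: designed data, lives on mass storage
    int width = 0, height = 0;
    std::vector<unsigned char> pixels;
};

class AssetCache {                     // loads and caches assets by name
public:
    const ImageAsset& image(const std::string& name) {
        auto it = images_.find(name);
        if (it == images_.end()) {
            ImageAsset asset;
            // ... decode the file denoted by "name" here (loader omitted) ...
            it = images_.emplace(name, std::move(asset)).first;
        }
        return it->second;
    }
private:
    std::map<std::string, ImageAsset> images_;
};

class TexturePool {                    // production resources: requested from the API
public:
    GLuint acquire()           { GLuint tex = 0; glGenTextures(1, &tex); return tex; }
    void   release(GLuint tex) { glDeleteTextures(1, &tex); }
};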

 

Where to start ... well, that is a good question. In fact you cannot render anything unless both assets and production resources are available. What do you have already? Is loading of assets already implemented? Is a basic renderer already implemented?




#5214129 Java Listeners

Posted by haegarr on 03 March 2015 - 01:46 AM

It is actually not necessary to send input asynchronously at all (at least in the described situation). It is absolutely fine to collect input inside the input sub-system to have some kind of short input history, and to let higher level sub-systems (like player control) investigate the input queue and look for input situations that match their needs. This is because during the execution of the game loop the various sub-systems are updated anyway, and they usually cannot do anything meaningful with input until they are updated.

 

EDIT: Ah, Ashaman73 already mentioned this with "input pulling".




#5213846 fork issue

Posted by haegarr on 02 March 2015 - 02:38 AM


1. Why does my child never return (i.e. "Child ended." is never printed, even if the launched executable ended)?

The manual page for execve (for which execl is a front-end) states: 

execve() does not return on success, and the text, data, bss, and stack of  the calling process are overwritten by that of the program loaded.[...]

That means that the program containing the printf("Child ended.\n") call is replaced by the program denoted by "executableName". Hence any code written after execve and its companions is executed if and only if execve has failed.
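
To make that concrete, a minimal sketch (POSIX; /bin/ls is just an arbitrary example program): the "Child ended." message belongs in the parent after waiting for the child, because in the child everything after execl only runs on failure.

#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        // child: on success execl never returns, the process image is replaced
        execl("/bin/ls", "ls", "-l", (char*)nullptr);
        perror("execl");               // reached only if execl failed
        _exit(1);
    }
    // parent: wait until the program started in the child has finished
    int status = 0;
    waitpid(pid, &status, 0);
    std::printf("Child ended.\n");
    return 0;
}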




#5213164 Order of matrix multiplication

Posted by haegarr on 26 February 2015 - 01:29 PM

For the following we need to distinguish between "row/column vectors" which means that there is a matrix with 1 row/1 column, resp., and "row/column major layout" which denotes how the 2D structure of a matrix is linearly stored in 1D computer memory. I state this explicitly just to make clear which terms I use for which property.

 


You also mentioned that pre vs. post multiplication depends on whether we are using row or column vectors. I was under the assumption that the major of the matrices we use also plays a part in it as: A * B = (B^T * A^T)^T.

In a matrix product A * B the left side matrix A is called the pre-multiplicand, and the right side matrix B is called the post-multiplicand. Here "pre" and "post" just denote the sequence of matrices when reading the expression from left to right. In itself, it depends on nothing but is just a naming scheme.
 
What I meant in my previous post was that if one needs to choose whether to pre- or to post-multiply, one has to consider both (a) whether row or column vectors are used, and (b) what you want to achieve. For example, you may want to apply a translation T onto an already existing transformation M, and you are using row vectors. Then the solution would be to post-multiply M by T. Why? Because of using row vectors, a vector v needs to be written on the left (due to the "row dot column" prescription), and T should be applied in the space that M results in. In summary, including the vector v:
   v * M * T

 

On the other hand, you may want to apply a translation T in the local object space before an already existing transform M is applied, and you are using column vectors. Then the solution would be ... again to post-multiply M by T. In summary, with a reasoning analogous to the above:

   M * T * v   

 

So the very interesting thing is that, just looking at the two transforms, we have M * T in both cases. This is because we have exchanged both the row/column vector usage and the locality in space. See, we have only 2 possibilities, M * T and T * M, but we have 4 combinations of row/column vector usage and logical application before/after another transform. So each possibility covers 2 combinations.
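
If it helps, here is a tiny sketch of the second case using GLM (just one possible column-vector library, not something the thread requires):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// GLM works with column vectors, so products are read from right to left.
glm::vec4 applyLocalTranslation(const glm::mat4& M, const glm::vec3& offset, const glm::vec4& v)
{
    glm::mat4 T = glm::translate(glm::mat4(1.0f), offset);
    // post-multiplying M by T applies T in the (local) space that feeds into M
    return M * T * v;
}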

 

 


To elaborate, given a matrix that contains scaling information S, and translation information T. If the matrices are row major, and T contains translation in its last row, then the multiplication order S * T will work. But if we are using column matrices and the translation is in the last column of T, then the multiplication order needs to be T * S. The resulting matrix in either case should contain the original scale and the translation. Is that not correct?

Err, it is not correct in general, because you already imply a specific use case. In fact it is totally legal to translate first and to scale in the resulting space. As I mentioned in one of the posts above, using an arbitrary center of scaling is absolutely a feature one may want, and this can be achieved only if scaling is done in a translated space.

 

So again: The transformation S * R * T (in case of row vectors) has a convenient order because it allows one to express any transform (possible with those primitives) with the smallest number of primitive matrices. But that does not make it the one and only solution.

 

BTW: "Translation in its last row" is not a requirement though. In fact, translation is stored where the homogenous co-ordinate is. This may be any row/column. Using the last row/column is just another commonly used convention but nevertheless a convention only. Of course, all matrices in a computation need to follow the same convention.




#5213052 Order of matrix multiplication

Posted by haegarr on 26 February 2015 - 03:04 AM

Forgot to answer this part:


In HLSL this would mean:
float4x4 transform = mul( mul( rotation, scale ), translate);
float4 worldPosition = mul(vertex, transform);
However in GLSL it would be:
mat4 translation = translate * scale * rotate;
vec4 worldPosition = translation * vertex;

That is not correct, insofar as neither HLSL nor GLSL prescribes the use of row or column vectors. It is totally legal to use

  HLSL: float4 worldPosition = mul(transform, vertex);

  GLSL: vec4 worldPosition = vertex * translation;

as well.

 

BUT: Mathematically, neither of the variables in my snippet is the same as its partner in your snippet. Instead, one of them is the transposed form of the other. This is very important, because in HLSL/GLSL you cannot directly see this. Moreover, as long as the matrix in question is a vector, both HLSL and GLSL simply make no distinction: they imply that a pre-multiplicand is a row vector in case it is a vector at all, and that a post-multiplicand is a column vector in case it is a vector at all. Nevertheless, in case the argument is not a vector, you as the programmer have the responsibility to ensure the correct form of the matrix.

 

For example, suppose you have your own matrix math library that works with column vectors (we leave the memory layout aspect aside here). Then a matrix fetched from the library can be used directly in HLSL when using mul(matrix, vector) as well as in GLSL when using matrix * vector, but it cannot be used in HLSL when using mul(vector, matrix) or in GLSL when using vector * matrix. However, using the transpose operator, it can be used in HLSL as mul(vector, transpose(matrix)) and in GLSL as vector * transpose(matrix).

 

Hope that helps.




#5213050 Order of matrix multiplication

Posted by haegarr on 26 February 2015 - 02:47 AM


To get a transformation matrix we have to concatenate three matrices: one for translation, one for rotation and one for scaling.

If you want to translate and rotate and scale, then you have to concatenate at least 3 dedicated transformation matrices. If you want additional kinds of transformations then there are more dedicated matrices involved. If you want more freedom (center of scaling, axes of scaling, center of rotation) then you need more dedicated matrices, although then the types of additional matrices are rotation and translation again. More on this at the end of this post.

 


The order of the concatenation matters, as each operation is relative to the origin of the matrix. This is regardless of handedness.

Correct so far, but I don't know whether "origin of the matrix" is a proper wording. I would say that each particular transformation happens with respect to a space, and the properties of the transformation may cause specific mappings of special points or directions in this space. The interesting rules are:

* The point 0 is always mapped onto itself when using a rotation or a scaling. 

* A point on a space axis is mapped onto the same axis when using a scaling.

 


The concept of pre v post multiplication is a separate issue from concatenation order.

The concept of pre- and post-multiplication exists because the matrix product is not commutative. However, whether to use pre- or post-multiplication in a particular case depends on whether you use row or column vectors, and it depends on the concatenation order you want to apply.

 


The correct order of concatenating these matrices is as follows: First Rotate, this will rotate the object around its point of origin. Next Scale, since we don't want the scaling to affect how far the object is translated from origin it must be scaled first. Finally Translate.

There is nothing like "the correct order of concatenation". Any order is correct w.r.t. a use case. However, there is one order where the particular transformations do not influence one another, and that order is scaling, followed by rotating, followed by translating.

 

Why? Because of what I've written above: Scaling has the 2 mapping properties, namely the center and the axes. But the axes are altered by a rotation. Hence doing the rotation first would have an influence on scaling. On the other hand, a rotation just maps the origin onto itself, and a scaling does so, too, so scaling first does not influence the rotation.

 

In general, however, and here we come back to the question of whether a combined transformation always consists of 3 matrices, you may want to use a rotation with an arbitrary center, and you may use a scaling with an arbitrary center and axes. In such a case, rotation and/or scaling themselves are no longer represented by pure rotation or scaling matrices, resp., but by combinations of them together with translations and rotations.

 

For example, the transform node in X3D uses arbitrary scaling axes and an arbitrary common center for rotation and scaling. When using column vectors (hence reading from right to left), the decomposed form looks like

    T * C * R * A * S * A⁻¹ * C⁻¹

where T, R, S denotes translation, rotation, and scale, resp., C denotes the center for scaling and rotation, and A denotes the axes for scaling.
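
Just as an illustration (not part of the X3D spec text), the same decomposition could be built with a column-vector library such as GLM like this:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// Column vectors: the product is applied to a vertex from right to left,
// i.e. C^-1 first, then A^-1, S, A, R, C, and finally T.
glm::mat4 x3dTransform(const glm::vec3& translation,  // T
                       const glm::vec3& center,       // C
                       const glm::quat& rotation,     // R
                       const glm::quat& scaleAxes,    // A (scale orientation)
                       const glm::vec3& scale)        // S
{
    glm::mat4 T = glm::translate(glm::mat4(1.0f), translation);
    glm::mat4 C = glm::translate(glm::mat4(1.0f), center);
    glm::mat4 R = glm::mat4_cast(rotation);
    glm::mat4 A = glm::mat4_cast(scaleAxes);
    glm::mat4 S = glm::scale(glm::mat4(1.0f), scale);
    return T * C * R * A * S * glm::inverse(A) * glm::inverse(C);
}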




#5213048 Handling of modifier keys

Posted by haegarr on 26 February 2015 - 02:13 AM

On top of Aressera's and Strewya's posts:

 

The problem comes from looking at input as events. There is no need to send input asynchronously to any and all sub-systems, so don't do so. Instead collect (more or less) raw input from the OS, encode each input into a unified structure including a time stamp, enqueue these structures, and let the sub-systems access the queue to investigate the current state and the (short time) history of input. This allows for arbitrary key press combos as already mentioned, but it also allows to easily check for temporal dependencies (e.g. key presses in sequence, and whether a combo key was pressed in time).
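
A minimal sketch of such a queue (structure and names are only illustrative):

#include <cstdint>
#include <deque>

// One unified record per raw input event, stamped with the time it was received.
struct InputEvent {
    enum class Kind { KeyDown, KeyUp, MouseMove, MouseButton } kind;
    int           code;       // key code or button index, depending on kind
    float         x, y;       // cursor position for mouse events
    std::uint64_t timestamp;  // e.g. milliseconds since program start
};

class InputQueue {
public:
    void push(const InputEvent& e) {
        events_.push_back(e);
        // keep only a short history, e.g. the last second of input
        while (!events_.empty() && e.timestamp - events_.front().timestamp > 1000)
            events_.pop_front();
    }
    // Sub-systems iterate over the history during their own update and look for
    // the patterns they care about (combos, sequences, "pressed in time", ...).
    const std::deque<InputEvent>& history() const { return events_; }
private:
    std::deque<InputEvent> events_;
};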




#5212442 [SFML] Distance between random placed & random spawn number each time

Posted by haegarr on 23 February 2015 - 07:26 AM

1.) Randomizing the number of items.

static const int MinNumBlocks = 4;
static const int MaxNumBlocks = 6; // must not be less than MinNumBlocks

sf::Sprite leftBlock[MaxNumBlocks];

// yields a value from MinNumBlocks up to and including MaxNumBlocks
int numBlocks = MinNumBlocks + rand() % ( MaxNumBlocks - MinNumBlocks + 1 );

for (int idxBlock = 0; idxBlock < numBlocks; idxBlock++) {
    ...
}

2.) Ensuring a minimal distance between items by re-rolling the position if the distance to any already placed item falls below a threshold.

static const float SquaredMinDistance = 20.0f; // square of the minimal allowed distance
static const int MaxNumTrials = 10;

for (int idxBlock = 0; idxBlock < numBlocks; idxBlock++) {
    float x = 0.0f;
    float y = 0.0f;
    for (int trial = 0; trial < MaxNumTrials; trial++) {
        x = rand() % 400 + 60;
        y = rand() % 400 + 60;
        bool okay = true;
        for (int idxCheck = 0; idxCheck < idxBlock; idxCheck++) {
            float xDist = x - leftBlock[ idxCheck ].getPosition().x;
            float yDist = y - leftBlock[ idxCheck ].getPosition().y;
            okay = okay && (( xDist * xDist + yDist * yDist ) >= SquaredMinDistance );
        }
        if( okay ) {
            break; // position is far enough away from all previously placed blocks
        }
        // otherwise: try another random position (after MaxNumTrials the last one is used anyway)
    }
    leftBlock[ idxBlock ].setTexture( BLOCK );
    leftBlock[ idxBlock ].setPosition( x, y );
}

(It's all untested code, but it should show the idea.)




#5212273 Help understanding Component-Entity systems.

Posted by haegarr on 22 February 2015 - 09:25 AM

One more question, about your first example:

struct Entity
{
  int Id;
  std::vector<TComponent*> Components;
};

Doesn't that vector cause problems with inheritance? If I try to run a function from a component that inherits from the base component class/struct, won't it only run the base component's function instead of the inheriting component's?

The reason for virtual functions in C++ is just that: Although you have a pointer to an object of the base class, the object may in fact be of any class inheriting from that base class, and invoking a virtual function already declared in the base class then in fact invokes an implementation overridden by the derived class. A typical candidate would be Component::update(). BUT ...

 

... one possible concept of ECS, and that concept is favored by BeerNutts, is to make components data holders only. Any usage (i.e. functions working on that data) is concentrated in sub-systems (see again BeerNutts' first post and look for "MovementSystem" and "EnemySystem", for example). Another concept would be to allow for both data components and behavior components, but still making a distinction.

 

Why is this useful? Look at a component that represents the placement of the entity in the world. It may be manipulated by a controller or animation first, then read by the collision sub-system; perhaps a collision resolution is needed that again alters the component's value. Later it is read by the graphics rendering to determine the world matrix. Such a data component can best be understood as a (perhaps complex) variable: It has a type (and can/should additionally have a semantic meaning), but how it is used is outside the scope of the variable itself.
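
A rough sketch of the data-only style (the types are illustrative; "MovementSystem" is borrowed from BeerNutts' example):

#include <unordered_map>

// Pure data components: no behaviour, just typed values with a meaning.
struct Placement {
    float x = 0.0f, y = 0.0f;   // position in the world
    float heading = 0.0f;       // orientation, radians
};

struct Velocity {
    float dx = 0.0f, dy = 0.0f;
};

// All behaviour lives in sub-systems that read/write the component data.
class MovementSystem {
public:
    void update(std::unordered_map<int, Placement>& placements,
                const std::unordered_map<int, Velocity>& velocities,
                float dt)
    {
        for (auto& [entityId, placement] : placements) {
            auto it = velocities.find(entityId);
            if (it == velocities.end()) continue;   // entity has no velocity component
            placement.x += it->second.dx * dt;
            placement.y += it->second.dy * dt;
        }
    }
};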




#5211861 Yet Another Procedural Planet (and some shader advice please)

Posted by haegarr on 20 February 2015 - 04:48 AM


My question is: given the lack of #include in GLSL, in a situation with multiple complex shaders (as in Bruneton) where there is a lot of overlap between functions, #defined constants, uniforms, is there any good generic advice on how to structure things? [...]

While GLSL lacks a built-in #include directive, OpenGL allows the shader code to be supplied in several pieces (see glShaderSource()). This is one way to implement an inclusion system by yourself, either implicitly (simply by "knowing" the structure) or explicitly (by some superimposed pre-processing).
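
For example (a sketch that assumes an existing GL context and loaded entry points, e.g. via GLEW; in a real renderer the strings would come from files, and note that the #version directive has to sit in the first piece):

#include <GL/glew.h>

GLuint compileWithCommonPiece(const char* commonSrc, const char* mainSrc)
{
    // The "include" happens right here: the shared piece is passed as the first
    // string, the shader specific piece as the second one.
    const char* pieces[] = { commonSrc, mainSrc };

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 2, pieces, nullptr); // 2 pieces, all null-terminated
    glCompileShader(shader);                    // check GL_COMPILE_STATUS in real code
    return shader;
}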

 


[...] Part of me wants to stick all of the uniforms/#defined constants in a big uniform block and include that. But I could use some advice from the pros.

Nowadays uniforms are usually provided by UBOs. As such they are declared in one or more blocks. I don't know how relevant it is for your use case, but in a typical 3D scenario one defines several uniform blocks depending on their sources and update frequencies: 1 block with pipeline stage parameters, 1 block with camera/view parameters, 1 block with material parameters, and so on.
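
A small sketch of one such block (names and binding point are arbitrary; the matching GLSL declaration would be layout(std140) uniform ViewParams { mat4 view; mat4 projection; };):

#include <GL/glew.h>
#include <glm/glm.hpp>

struct ViewParams {                 // matches the std140 layout of the GLSL block
    glm::mat4 view;
    glm::mat4 projection;
};

GLuint createViewParamsUbo(const ViewParams& params, GLuint bindingPoint)
{
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(ViewParams), &params, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, bindingPoint, ubo); // attach to its binding point
    return ubo;
}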




#5211856 Remapping barycentric coordinates to barycentric coordinates of a sub-triangle?

Posted by haegarr on 20 February 2015 - 04:09 AM

And the derivation is:

 

The point does not change its Cartesian co-ordinates, so

    p( a,b,c ) = p( a',b',c' )

where

   p( a,b,c ) = a * p1 + b * p2 + c * p3

   p( a',b',c' ) = a' * p1 + b' * p2 + c' * ( p2 + p3 ) / 2

which gives (by comparing the coefficients)
   a' = a
   b' = b - c' / 2 = b - c
   c' = 2 c
 
That matches your solution for b > c. It does not hint at the need for a case distinction. Now, if c > b, then b' = b - c is negative, i.e. p lies outside the nominated sub-triangle, so a', b', and c' cannot all be non-negative.
 
 
So ... I'm not sure why you made the case distinction!?
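
In code the remapping is then simply (a sketch following the derivation above; the sub-triangle is (p1, p2, (p2+p3)/2)):

struct Barycentric { float a, b, c; };

// Remap coordinates w.r.t. (p1, p2, p3) into coordinates w.r.t. (p1, p2, (p2+p3)/2).
Barycentric remapToSubTriangle(const Barycentric& bc)
{
    // a' = a, b' = b - c, c' = 2c; if c > b then b' is negative,
    // i.e. the point lies outside the sub-triangle.
    return { bc.a, bc.b - bc.c, 2.0f * bc.c };
}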



#5210841 Disassociate mouse with its position

Posted by haegarr on 15 February 2015 - 09:20 AM

I'm no expert on Windows problems, so there may be a better way. However, you can set the cursor back to the screen's center after receiving any mouse movement, using SetCursorPos or some similar function. IIRC, setting the cursor this way does not introduce mouse movement events of its own, so you need not distinguish between regular and irregular movements.
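
A rough sketch of the idea (plain Win32; error handling omitted, and the check against the center is just a defensive guard in case the warp itself produces a move message):

#include <windows.h>

// Call this whenever a mouse move is received; it returns the relative movement
// and warps the cursor back to the screen center.
void handleMouseMove(int& dx, int& dy)
{
    const int centerX = GetSystemMetrics(SM_CXSCREEN) / 2;
    const int centerY = GetSystemMetrics(SM_CYSCREEN) / 2;

    POINT p;
    GetCursorPos(&p);                      // current cursor position in screen coords
    dx = p.x - centerX;
    dy = p.y - centerY;

    if (dx != 0 || dy != 0)                // ignore the move caused by the warp itself
        SetCursorPos(centerX, centerY);    // put the cursor back to the center
}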

 

BTW: The issue is not related to OpenGL. It would be better placed into another forum.




#5209417 Light-weight render queues?

Posted by haegarr on 08 February 2015 - 09:29 AM


That's what I don't understand. Constant buffers, texture slots, samplers, drawtypes, depthstencil buffers etc don't sound like "high-level data". A texture unit or slot for example sounds like something privy to the renderer rather than a high-level scene object. What am I missing?

Constant buffers, texture slots, depthstencil buffers, ... are operating resources (hence resources not in the sense of assets). If you have "high-level data" like material parameters or viewing parameters or whatever is constant for a draw call, they can be stored within a constant buffer to provide them to the GPU. From a high-level view it's the data within the buffer that is interesting, not the buffer which is only the mechanism to transport it. From a performance point of view, it's the transport mechanism that is interesting, not the data within. Same for textures.

 

With programmable shaders the meaning of vertex attributes, constant parameters, or texture texels is not pre-defined. It is just how the data is processed within a shader script that gives the data its meaning. To give a clue of how it is processed, the data is marked with a semantic.

 

Now, does the renderer code need to know what a vertex normal, a bump map, or a viewport is? In exceptional cases perhaps, but in general it need not. It just needs to know which block of data needs to be provided as which resource (by its binding point / slot / whatever). The renderer code does not deal with high-level data, it deals with the operating resources. That is what swiftcoder says.
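
As a sketch, an entry of such a light-weight queue might look like this (purely illustrative field names): it references operating resources by handle and slot, without knowing what the data means.

#include <cstdint>

// The renderer back-end only sees which resource goes into which slot;
// the semantics ("normals", "bump map", ...) stay in the shaders and the high level.
struct DrawItem {
    std::uint64_t sortKey;            // e.g. packed shader / material / depth bits
    std::uint32_t shaderProgram;      // operating resource handles, bound by slot index
    std::uint32_t vertexBuffers[4];
    std::uint32_t constantBuffers[4];
    std::uint32_t textures[8];
    std::uint32_t indexCount;
    std::uint32_t startIndex;
};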

 

State parameters for the fixed parts of the GPU pipeline are different since they need to be set by the renderer code explicitly.




#5209394 calculating z coordinate of camera

Posted by haegarr on 08 February 2015 - 06:10 AM

You say that I didn't provide you with w and h but isn't that my 1680 and 1050 or am I missing a step?

You wrote that the texture is 1680 by 1050 pixels, which is a resolution. You wrote that the aspect ratio of the plane is 1680/1050, which is, well, just a ratio. If you meant that the edge lengths of the plane are 1680 by 1050 length units in world space, then all is fine.

 

Also doesn't happycoders give me the z distance in pixels and not translated to z axis?

Dimension analysis of the formula:

    [ z ] = [ h ] * [ tan(a) ]

where

    [ tan(a) ] = 1

so

    [ z ] = [ h ]

 

With respect to my first post above, where I hinted at the need for a plane in world space, you get

    [ z ] = [ h ] = 1 lu  (which means length unit)

 
So, if you feed in h as a world dimension, you get that world dimension back.




