haegarr

Member Since 10 Oct 2005

#5148508 Is this code to calculate the datasize of an ARGB8 texture?

Posted by haegarr on 21 April 2014 - 04:16 AM


Is that "A full MIP map chain ends with the 1x1 texel texture inclusive" also applied to other texture formats like dxt5 or dxt1?

DXTn is an encoding of texture data. Due to its block nature, the last MIP level that still fills an entire block is 4x4 texels. However, with some padding one can get down to 1x1 again. Searching for the problem brings e.g. this thread to light, where Evil Steve answers your question in post #5.

 

Of course, the code snippet in the OP does not consider compression but raw texture data. When considering compression, you should use the mipMaps <= ld( max( width, height ) ) formula instead of the conditional break, you need to restrict mipHeight and mipWidth to the respective block's edge length instead of 1, and you need to multiply the resulting size by the compression ratio (which depends on the compression scheme and sometimes on the texel format, but is fortunately a fixed rate within one scheme).
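
To illustrate, a minimal sketch of such a computation (all names are mine, not from the OP's snippet; it counts padded 4x4 blocks directly instead of multiplying by a ratio, which amounts to the same thing; bytesPerBlock is 8 for DXT1 and 16 for DXT3/DXT5):

#include <algorithm>
#include <cstddef>

// Sketch: total byte size of a DXTn compressed texture including all MIP
// levels, where mipMaps counts the levels below the base level,
// 0 <= mipMaps <= ld( max( width, height ) ).
std::size_t dxtTextureSize(std::size_t width, std::size_t height,
                           std::size_t mipMaps, std::size_t bytesPerBlock)
{
    const std::size_t blockEdge = 4; // DXTn encodes 4x4 texel blocks
    std::size_t size = 0;
    for (std::size_t level = 0; level <= mipMaps; ++level) {
        // pad each dimension up to whole blocks; this is the padding
        // mentioned above that lets the chain go down to 1x1
        const std::size_t blocksX = (width  + blockEdge - 1) / blockEdge;
        const std::size_t blocksY = (height + blockEdge - 1) / blockEdge;
        size += blocksX * blocksY * bytesPerBlock;
        width  = std::max<std::size_t>(width  / 2, 1);
        height = std::max<std::size_t>(height / 2, 1);
    }
    return size;
}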




#5148403 Is this code to calculate the datasize of an ARGB8 texture?

Posted by haegarr on 20 April 2014 - 02:17 PM

A full MIP map chain ends with the 1x1 texel texture inclusive. But when used on a GPU, a MIP map chain just needs to be complete, and that may be before the 1x1 texture is reached. So it depends on how the MIP map level limits are chosen.

 

Your code snippet seems correct to me as long as the variable mipMaps doesn't cause the 1x1 texture to be considered several times, i.e. the value of mipMaps should satisfy

    0 <= mipMaps <= ld( max( width, height ) )

 

Alternatively, breaking the loop just after "size+=tempSize;" like so

    if (mipHeight==1 && mipWidth==1) break;

would also work in this sense.

 

 

EDIT: Just for clarification: The ld(x) function in the formula above means the "logarithmus dualis", i.e. the logarithm to base 2.

    ld( x ) := log2( x )

In case it isn't available in your favorite math library, it can be computed from any other logarithmic function by dividing by that same logarithm of the base 2. To compute ld when e.g. the natural logarithm or the decimal logarithm is available, this looks like

    log2( x ) = loge( x ) / loge( 2 )

    log2( x ) = log10( x ) / log10( 2 )

You may want to round to the nearest integer to avoid floating point problems.
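
A minimal sketch of both pieces, with names of my own choosing (the OP's snippet is not reproduced here):

#include <cmath>
#include <cstddef>

// Sketch: total byte size of an ARGB8 (4 bytes per texel) texture with a
// full MIP map chain down to 1x1 inclusive, using the break variant.
std::size_t argb8TextureSize(std::size_t width, std::size_t height)
{
    std::size_t size = 0;
    std::size_t mipWidth = width, mipHeight = height;
    for (;;) {
        size += mipWidth * mipHeight * 4; // 4 bytes per ARGB8 texel
        if (mipWidth == 1 && mipHeight == 1)
            break; // the 1x1 level is counted exactly once
        if (mipWidth  > 1) mipWidth  /= 2;
        if (mipHeight > 1) mipHeight /= 2;
    }
    return size;
}

// Number of MIP levels below the base level as given by the formula
// above, ld( max( width, height ) ), rounded to the nearest integer.
std::size_t mipMapCount(std::size_t width, std::size_t height)
{
    const double m = static_cast<double>(width > height ? width : height);
    return static_cast<std::size_t>(std::lround(std::log2(m)));
}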



#5148337 Why is the first spritesheet laid out like this? Seems harder to read the spr...

Posted by haegarr on 20 April 2014 - 05:15 AM

It should be mentioned that "sprite" nowadays does not necessarily mean the old-school image rectangle. Advanced sprite techniques exist that allow for lighting effects, for example. There is also a technique that allows for animation by mesh morphing; the game made with it is one about zoo animals, though I currently do not remember its name. That technique is actually based on irregular meshes and hence would in principle allow for an "overlapping" packing like in the 2nd atlas. However, it also requires a higher texel resolution than those shown in the OP to look good. (In other words, the 2nd atlas is probably not an example of this technique.)

 

Using irregular bounding boxes is also not exotic these days. Texture packers usually export a file beside the texture, wherein the clips are stored by name/index and texel co-ordinates. However, the usual texture packers still deal with AABBs only, even if they support bin packing.

 


Please stop calling them “sprite sheets”. It’s an insult to the industry.

Well, this is interesting. In the world of perhaps not exactly professional 2D game makers there are many examples of tools and libraries that actually use the term "sprite sheet": for example Texture Packer, Zwoptex, cocos2d, and perhaps most notably Flash (at least in CS6). So the term was effectively introduced in the pre-industrial phase.




#5147417 camera relative coordinates and setting the camera

Posted by haegarr on 16 April 2014 - 12:06 PM

so this means all i have to do is put the camera at the origin, rotate it, then translate everything else in view relative to the camera (the origin) before drawing, right?

A model transform M, followed by the view transform of a camera that is first rotated (R_C) and then translated (T_C), looks like (using row vectors)

      M * ( R_C * T_C )^-1

which is equivalent to

      = M * T_C^-1 * R_C^-1

 

So yes, you can do an inverse translation manually before the view transform is applied, just to "simulate" the camera positioning.

 

Writing M also as a rotation followed by a translation, we get

      = R_M * T_M * T_C^-1 * R_C^-1

where a combined translation

      T_MC := T_M * T_C^-1 = T( t_Mx - t_Cx ,  t_My - t_Cy ,  t_Mz - t_Cz )

is contained. It is computed as the difference of the two positions, from the camera to the model. With objects close enough to the camera, the combined translation can again be expressed using a matrix of type double, so that a total matrix transform

      R_M * T_MC * R_C^-1

would be a possible way to go.
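
A minimal sketch of the combined translation (the type and names are mine, purely illustrative):

// Sketch of camera-relative rendering: positions are held in high
// precision, and only the small difference  t_MC = t_M - t_C  enters
// the (row-vector) transform chain  R_M * T_MC * R_C^-1.
struct Vec3d { double x, y, z; };

Vec3d relativeTranslation(const Vec3d& modelPos, const Vec3d& cameraPos)
{
    // small for objects close enough to the camera
    return { modelPos.x - cameraPos.x,
             modelPos.y - cameraPos.y,
             modelPos.z - cameraPos.z };
}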



#5147352 Memory management of resources

Posted by haegarr on 16 April 2014 - 06:56 AM


@haegarr I think your concept of a resource pool is too much for my needs. It seems to me that the resource pool's purpose is to reduce the dynamic allocations using new / delete, but for now that isn't an issue. And besides, even when using the resource pool I'll still have to manage the memory manually, but instead of calling delete on the pointer, I will need to tell the resource pool that the memory is not used anymore.

Even worse (from the standpoint of overhead): The client needs to return the resource to the library. Only if the library determines the returned resource to be no longer in use (e.g. by internal reference counting) and the usage policy allows the resource to be unloaded (e.g. the resource is not marked to remain for the runtime of the entire game, the runtime of the current level, or similar), the resource is unloaded and the memory returned to the allocator. The allocator then decides how to handle the memory, perhaps returning it to the free heap.

 


I'm considering going with an intrusive reference counting approach. It seems to me to have the most convenient usage with not too much overhead...

Of course, a concept like the one described has many facets, and not all of them are needed in every situation. Using a reference counting system without caching and pooling is fine if resource generation time is not a critical factor (as long as you have at least an interface where resource sharing takes place).
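
For reference, a minimal single-threaded sketch of intrusive reference counting (the names are mine; a real implementation would use an atomic counter for thread safety):

#include <cassert>

// Sketch: the counter lives inside the resource itself; release()
// destroys the object when the last reference is gone.
class RefCounted {
public:
    void addRef() { ++m_refs; }
    void release() {
        assert(m_refs > 0);
        if (--m_refs == 0)
            delete this;
    }
protected:
    virtual ~RefCounted() = default; // only release() may destroy
private:
    unsigned m_refs = 1; // the creator holds the first reference
};

class Texture : public RefCounted { /* resource payload ... */ };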




#5147298 Memory management of resources

Posted by haegarr on 16 April 2014 - 01:29 AM


@haegarr: I agree with the sole responsibility principle, but I still can't see how you would solve the problem I described. Given a resource that wasn't loaded using the resource loader but was manually created, how would you go about managing its memory? Or are you saying that every memory allocation would go through the resource pool?

Yes, that is what I am saying. Although I should have noted that the resource pool is one specialization of memory allocation in general. The pool strategy works fine for object allocations on the heap, and its use reduces the number of invocations of new and delete. Other use cases, e.g. graphics API provided buffers, will need other strategies, of course, but can also be hidden behind an allocator interface.

 

Considering your problem, a resource generator works more or less identically to a resource loader when seen from the outside: It provides a resource object.

 

While memory ownership lies with the allocator (e.g. a pool), the management of a resource's lifecycle as an object may differ depending on whether a generator or a loader was the supplier. For static resources it is typical that the resource is loaded from mass storage and, due to the latency of mass storage compared to RAM, held in a cache. Strictly speaking, such a cache is just one possible implementation of a resource runtime storage. So it is arguable that in fact the interface to resources is given by a resource library, and such a library may use a resource cache as its internal storage solution. (Such a library is often named a resource manager, but IMO only the composite of library, cache, allocator, loader, and perhaps others together defines the manager.)
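
A rough sketch of how these responsibilities could be separated behind interfaces (all names are mine, purely illustrative, with ownership details elided):

#include <memory>
#include <string>

class Resource { /* ... */ };

// Loader and generator look identical from the outside:
class IResourceSupplier {
public:
    virtual ~IResourceSupplier() = default;
    virtual std::shared_ptr<Resource> supply(const std::string& name) = 0;
};

// The library is the client-facing interface; a cache would be one
// possible internal storage solution behind acquire().
class ResourceLibrary {
public:
    explicit ResourceLibrary(IResourceSupplier& supplier) : m_supplier(supplier) {}
    std::shared_ptr<Resource> acquire(const std::string& name) {
        // a cache lookup would go here; on a miss, delegate to the supplier
        return m_supplier.supply(name);
    }
private:
    IResourceSupplier& m_supplier;
};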




#5146669 Tips for Programming a GUI

Posted by haegarr on 13 April 2014 - 03:47 AM

The implementation of a GUI system has aspects of input processing, graphical representation, and effect handling.

 

1.) Clarify whether immediate mode or retained mode GUI is what you prefer.

 

2.) Define how input is handled (fetched, collected, prepared) before it is provided to consumers.

 

3.) Define whether provided input is pushed to consumers or pulled by consumers. E.g. when running an iteration of the game loop, the various sub-systems are called anyway and hence may investigate the provided input whenever they are updated. This is an alternative to the push model known from application programming.

 

4.) GUI elements compete with other GUI elements and with game controller objects. Define how routing of input to the various possible consumers is to be done. Is the gameplay view just another widget?

 

5.) Define how the effects of action widgets (e.g. pushing a button) are processed. E.g. does it invoke a callback, send a message, or send a command object?

 

6.) Define how the effects of value widgets (e.g. dragging a slider knob) are processed. E.g. alter a local value, use a callback, send a message, send a command object, or change the target value in-place (the latter is fine for tweaking).

 

7.) Depending on the targeted platform: Make the graphical rendering independent of the resolution (DPI) by using scalable graphics / multi-resolution graphics.

 

8.) Even more important (IMO): Make the placement independent of the aspect ratio by using various anchors (horizontally left / center / right, vertically top / center / bottom) and a resolution independent length (e.g. normalized with the display height); see the sketch at the end of this post.

 

 

Hopefully you don't need things like table or outline views and drag & drop, because they make things really complicated.
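
As announced in point 8.), a minimal sketch of anchor-based placement (the names and the parametrization are mine, purely illustrative):

// Sketch: resolution and aspect independent placement. Offsets are given
// in units normalized to the display height; anchors pick the reference
// edge or center per axis.
enum class HAnchor { Left, Center, Right };
enum class VAnchor { Top, Center, Bottom };

struct Placement {
    HAnchor h; VAnchor v;
    float offsetX, offsetY; // in display-height units
};

// Resolves a placement to pixel coordinates for the given display size.
void resolve(const Placement& p, int displayW, int displayH,
             float& pixelX, float& pixelY)
{
    const float unit = static_cast<float>(displayH); // 1.0 == display height
    const float baseX = (p.h == HAnchor::Left)   ? 0.0f
                      : (p.h == HAnchor::Center) ? displayW * 0.5f
                                                 : static_cast<float>(displayW);
    const float baseY = (p.v == VAnchor::Top)    ? 0.0f
                      : (p.v == VAnchor::Center) ? displayH * 0.5f
                                                 : static_cast<float>(displayH);
    pixelX = baseX + p.offsetX * unit;
    pixelY = baseY + p.offsetY * unit;
}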




#5145638 Eliminating OpenGL/DirectX differences

Posted by haegarr on 09 April 2014 - 06:13 AM


...

This is really everywhere, that's also the reason I had to reverse the texture coordinates manually in the application for the fullscreen quad and the sprite. I also mirrored the textures from the filesystem when loading them with FreeImage, which makes normal model texcoordinates work. Now, is there any way to solve this? ...

I assume you suffer from the window co-ordinate problem: The book excerpt I already mentioned above describes this with the "present transform" in eq. 39.1 and 39.2 and the following explanation "Window Origin". It mentions three ways to overcome it, each with some kind of drawback, of course.




#5145284 Get position of rotated rectangle

Posted by haegarr on 08 April 2014 - 04:31 AM

Using trigonometry, you can see the sides as the hypotenuses of right angled triangles.

topLeft = (x, y);
topRight = (x + (width * cos(angle)), y + (width * sin(angle)));
bottomLeft = (x + (height * sin(angle)), y + (height * cos(angle)));
bottomRight = (x + (width * cos(angle)) + (height * sin(angle)), y + (width * sin(angle)) + (height * cos(angle)));

Almost correct. But 2 of the sine terms need to be subtracted instead of added.

 

For example, if rotation with positive angle goes clockwise (and I made no mistake): 

topLeft = (x, y);
topRight = (x + (width * cos(angle)), y + (width * sin(angle)));
bottomLeft = (x - (height * sin(angle)), y + (height * cos(angle)));
bottomRight = (x + (width * cos(angle)) - (height * sin(angle)), y + (width * sin(angle)) + (height * cos(angle)));

 

EDIT: In general it is more convenient to use matrices for such stuff.
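
A minimal sketch of the matrix variant (the names are mine; same convention as above, i.e. a positive angle rotates clockwise in a y-down coordinate system):

#include <cmath>

struct Vec2 { float x, y; };

// Rotates v by the angle encoded in (c, s) and adds the rectangle origin.
static Vec2 transform(float c, float s, Vec2 v, Vec2 origin)
{
    return { origin.x + v.x * c - v.y * s,
             origin.y + v.x * s + v.y * c };
}

// Sketch: the four corners of a rotated rectangle via a 2x2 rotation matrix.
void rectangleCorners(Vec2 origin, float width, float height, float angle,
                      Vec2 corners[4])
{
    const float c = std::cos(angle), s = std::sin(angle);
    corners[0] = origin;                                      // topLeft
    corners[1] = transform(c, s, { width, 0.0f },   origin);  // topRight
    corners[2] = transform(c, s, { 0.0f,  height }, origin);  // bottomLeft
    corners[3] = transform(c, s, { width, height }, origin);  // bottomRight
}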




#5144936 change 1D addres to 2D addres

Posted by haegarr on 07 April 2014 - 01:10 AM

If w is a power of two, then you can optimize hugely though by replacing the % with a mask (&) and the / with a shift (>>).

An extension to this suggestion: If x need not be contiguous but is a kind of packed address, w can be chosen "arbitrarily" as a suitable constant (probably 12 bits in the case that x is a float, because a float provides 24 bits in its mantissa). This would allow for 4k by 4k textures at most.
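
A minimal sketch of the mask/shift variant for a power-of-two width (the names and the 12 bit choice follow the suggestion above):

#include <cstdint>

// Sketch: split a packed 1D address into 2D coordinates when the row
// width is a power of two (here 2^12 = 4096).
constexpr std::uint32_t kShift = 12;                 // log2(width)
constexpr std::uint32_t kMask  = (1u << kShift) - 1; // width - 1

inline void addr1Dto2D(std::uint32_t addr, std::uint32_t& x, std::uint32_t& y)
{
    x = addr & kMask;   // equivalent to addr % width
    y = addr >> kShift; // equivalent to addr / width
}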




#5144817 Eliminating OpenGL/DirectX differences

Posted by haegarr on 06 April 2014 - 03:03 PM

I'm interested in what is wrong with my previous answer. It would be nice if you could clarify this...

 

1 - Change the matrix layout(row/column major)

IMHO I have not said anything contradicting this. However, the freedom is not without costs. With the input registers of the GPU being exactly 4 elements wide, it is most efficient if the columns / rows of a matrix passed into the GPU are 4 elements wide. This is no problem with a 4 by 4 matrix, obviously, but it matters if one passes 4 by 3 matrices (which are sufficient for affine transformations because the remaining row / column is constant; the most obvious use case is GPU based skinning).

 

Within an application that was implemented following OpenGL's historical conventions, a 4 column by 3 row matrix in column major order nevertheless requires 4 input registers although only 12 elements are actually used. Things are analogous with D3D's historical conventions. Hence the conventions of both OpenGL and D3D were changed (I'm still speaking of a convention, not a constraint). Fortunately both changes are such that the absolute sequence of values is again the same for both OpenGL and D3D. So passing them by cbuffers / UBOs makes no difference, assuming the expected layout is used. That's what my point 1.) in the above answer is about.

 

2 - Mat * Vec/Vec * Mat depends ONLY on your math library.
I mentioned in point 2.) that both products can be computed in shaders, too.
 
BTW: It is not totally true that the order depends only on the math library. With the layout parametrization of matrices one can pass matrices in so that they act as if transposed. Because GPUs do not distinguish between N by 1 and 1 by N vectors, it is sufficient to transpose a matrix if one wants to reverse the order of its products inside the shader. So the order of multiplication depends on both how matrices are provided by the math library and how they are passed into the shader.

 

3 - There is no such thing as a "LEFT/RIGHT hand coord system" for the hardware. You can use any coord sys.

Yes, you can, but you need to take care that camera-space coordinates are transformed into the intended clip-space coordinates (which differ between D3D and OpenGL). You do this by defining an appropriate projection matrix. The projections for a LH and a RH co-ordinate system will differ. With the well known standard projection matrix P_GL of OpenGL in mind, applying a mirroring onto the z axis yields the corresponding LH matrix. That's what my point 3.) in the above answer is about. Is there a mistake in this reasoning?

 

 

@OP:

There is a sample book chapter The ANGLE Project: Implementing OpenGL ES 2.0 on Direct3D (PDF) that deals with OpenGL ES 2.0 being implemented on top of Direct3D 9. That is not exactly what you are after, but perhaps some of the things mentioned there may be of interest to you. Two aspects that came to my mind are the different clip spaces and the different window co-ordinates, both of which are investigated in the book chapter. Hope that helps.
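
For context, a minimal sketch of the clip space difference mentioned there (the remap matrix Z is my own illustration; OpenGL's clip space z range is [-1,1] after the perspective divide, D3D's is [0,1]):

// Sketch: post-multiply an OpenGL style projection so its clip space z
// range [-1,1] maps onto D3D's [0,1]:  z' = 0.5 * z + 0.5 * w.
// With row vectors the combined matrix is  P_GL * Z, where Z is:
const float Z[4][4] = {
    { 1.0f, 0.0f, 0.0f, 0.0f },
    { 0.0f, 1.0f, 0.0f, 0.0f },
    { 0.0f, 0.0f, 0.5f, 0.0f },
    { 0.0f, 0.0f, 0.5f, 1.0f },
};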




#5144747 Eliminating OpenGL/DirectX differences

Posted by haegarr on 06 April 2014 - 10:05 AM

I'm not 100% sure, but ...

 

1.) OpenGL is used to work with column vectors and matrices in column major order. D3D is traditionally used to work with row vectors and matrices in row major order. In memory this makes the same sequence of numbers.

 

  EDIT: So, after some more research … HLSL assumes column-major order by default, and GLSL assumes column-major order, too. Just the D3D9 FFP and DirectXMath use row-major layout.

 

2.) Using shaders gives you the freedom to choose between A * B and B * A as you need. You can use column / row vectors in both HLSL and GLSL.

 

3.) If you want to use a LH co-ordinate system with OpenGL, the only thing you need to do is insert a scaling S(1,1,-1) between the projection matrix and the view matrix. Because the projection matrix is different anyway, you can include the mirroring into the projection matrix used for OpenGL, like so when using row vectors (see the sketch at the end of this post):

     V * ( S(1,1,-1) * P_GL )

 

4.) The fact that you need to negate m22 of the projection matrix … seems strange to me. I assume that there is a mistake somewhere.

 

  EDIT: After some research: The reason is the window origin problem.
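
As referenced in point 3.), a minimal sketch of folding the mirroring into the projection (the function and storage convention are mine, purely illustrative):

// Sketch: with row vectors, prepending S(1,1,-1) to a projection matrix P
// amounts to negating the third row of P, because S = diag(1, 1, -1, 1)
// and hence (S * P)[2][j] = -P[2][j].
void mirrorZIntoProjection(float P[4][4])
{
    for (int j = 0; j < 4; ++j)
        P[2][j] = -P[2][j]; // row index 2 is the z row
}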




#5144524 Problem with vignette shader on PC

Posted by haegarr on 05 April 2014 - 02:34 AM


Had a suggestion on another forum and this is pretty close to working.

So, what is wrong with using gl_FragCoord in the end? I've used it in a test implementation on a MacBook and it worked well. You also mentioned it is working on an iPad. And from a logical point of view it should work anyway...

 


The only thing now is that it is positioned a bit too far across and down the screen.

The shader script shown in the post above expects a full-screen quad rendered with texture co-ordinates [0,1] from left to right and [0,1] from bottom to top. If you supply the drawing with such a quad, then there should be no need to adapt the center variable.

 


And I dont have the nice oval shape I had in Codea anymore.

This is due to the aspect correction factor in the shader code. If you want the oval back, then remove the aspect factor (shown struck through in the original post) from the following line:

       aspect_center.x = (v_vTexCoord.x - center.x) * (resolution.x / resolution.y);

i.e. drop the trailing * (resolution.x / resolution.y).




#5144316 Problem with multyple function with same name but different parameters and i...

Posted by haegarr on 04 April 2014 - 05:55 AM

I'm curious as to why this would be a fix.

 

At least the last error

1>main.cpp(114): error C2661: 'SpecialEffectHolder::AddSpecialEffect' : no overloaded function takes 2 arguments

lets me assume that main.cpp includes the HPP, finds a 3 and a 4 parameter variant of the routine, but has an invocation with 2 arguments. Hence it is not aware that the 3 parameter variant has a default argument and hence should be invokable with 2 arguments, too.

 

Now assuming that the other errors

 

1>main.cpp(109): error C2664: 'void SpecialEffectHolder::AddSpecialEffect(int,SpecialEffect &,int *)' : cannot convert parameter 2 from 'int' to 'SpecialEffect &'
1>main.cpp(110): error C2664: 'void SpecialEffectHolder::AddSpecialEffect(int,SpecialEffect &,int *)' : cannot convert parameter 2 from 'int' to 'SpecialEffect &'
1>main.cpp(111): error C2664: 'void SpecialEffectHolder::AddSpecialEffect(int,SpecialEffect &,int *)' : cannot convert parameter 2 from 'int' to 'SpecialEffect &'
1>main.cpp(112): error C2664: 'void SpecialEffectHolder::AddSpecialEffect(int,SpecialEffect &,int *)' : cannot convert parameter 2 from 'int' to 'SpecialEffect &'

stem from invocations with 3 arguments that target the 4 parameter variant, the compiler tries to fit them into the one and only 3 parameter variant it knows of, and hence requires the int to reference conversion.




#5144311 Problem with multyple function with same name but different parameters and i...

Posted by haegarr on 04 April 2014 - 05:41 AM

Write the default value into the declaration, not into the definition:

//HPP
// Default arguments belong into the declarations only:
void AddSpecialEffect(int posX, int posY, SpecialEffect & specialEffect, int * SpecialEffectVecID = 0);
void AddSpecialEffect(int UnitVecID, SpecialEffect & specialEffect, int * SpecialEffectVecID = 0);


//CPP
// The definitions must not repeat the default arguments:
void SpecialEffectHolder::AddSpecialEffect(int posX, int posY, SpecialEffect & specialEffect, int * SpecialEffectVecID)
{
    // ...
}
void SpecialEffectHolder::AddSpecialEffect(int UnitVecID, SpecialEffect & specialEffect, int * SpecialEffectVecID)
{
    // ...
}




