

Member Since 27 Jan 2014
Offline Last Active Today, 03:27 AM

Topics I've Started

Problems with tangent to world space conversion

28 March 2015 - 09:53 AM

I added normal mapping to my project a while back, which seemed to work as intended, so I didn't think much more about it until I recently tried to also add specular mapping. The specular result did not look right at all, and after some debugging it seems that my normals (sampled from a normal map) are not actually being converted into world space properly after all.


So here is how I'm reading the normals:

float3x3 matIT = transpose(float3x3(normalize(IN.tangent), normalize(IN.binormal), normalize(IN.normal)));
float3  normal = NormalMap.Sample(DSampler, IN.texcoord).xyz * 2 - 1; // Convert to -1 .. +1 range
normal = normalize(normal.x * matIT[0] + normal.y * matIT[1] + normal.z * matIT[2]); // Bring into world space

The IN.normal, IN.tangent and IN.binormal come from the per-vertex input data and are multiplied by the rendered mesh's world matrix prior to being sent to the pixel shader (where the above calculations occur).
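As a sanity check, the intended math can be replicated on the CPU: with an orthonormal tangent/bitangent/normal basis, the world-space normal is simply n.x*T + n.y*B + n.z*N. A small sketch with hand-rolled types (Vec3 and tangentToWorld are illustrative names, not part of my engine):

```cpp
#include <cmath>

// Minimal 3-component vector for a CPU reference of the shader math.
struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Tangent-space normal (already remapped to -1..+1) into world space:
// with an orthonormal T/B/N basis this is a plain change of basis,
// n.x*T + n.y*B + n.z*N.
Vec3 tangentToWorld(Vec3 n, Vec3 T, Vec3 B, Vec3 N) {
    return normalize({ n.x * T.x + n.y * B.x + n.z * N.x,
                       n.x * T.y + n.y * B.y + n.z * N.y,
                       n.x * T.z + n.y * B.z + n.z * N.z });
}
```

A flat normal-map texel (0, 0, 1) should always come out as the interpolated vertex normal N, which makes this easy to verify against the shader's output.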



The odd thing is that this does yield proper-looking normal mapping results, but when I try to use these normals for specular highlights the lighting is off.

Rendering the output normal vector of a spinning cube further suggests something is wrong: after rotation (applied by changing its world matrix), the upward-facing vertices show the correct, upward-facing normal direction for only 4 of the 6 faces, not all 6 as I can only imagine they should if this was working properly.

The fact that the cube normals do indeed change when rotating (albeit not always correctly) suggests it isn't a simple oversight of the code above transforming the normal into object space instead of world space either.

It is at times like these that I regret not taking some extra math courses when I had the chance...

Any help or pointers would be greatly appreciated.

Simplifying Assimp 3.1-imported FBX scenes

27 February 2015 - 04:48 AM

I've recently updated to version 3.1 of the Assimp library for importing 3D models into my game engine.

With this came support for importing FBX files, which is nice. However, the FBXImporter attempts to retain a lot of separate settings that are present in the file but have no direct correspondence in the aiNode / aiNodeAnimation structs. It does this by creating a lot of extraneous nodes that carry static transforms for things like pivots and offsets to the separate scaling, rotation and translation steps. Naturally this creates an overly complicated node hierarchy that both uses considerably more memory and is slower to evaluate for animation purposes.

The additional transforms won't be changed by my application at all, so I don't need to retain them if they could be folded into a more concise representation. I thought about just making these offset matrices part of my Node class, but apparently there can be upwards of 10 of them per node, so that would be a big waste considering most of them would be set to the identity matrix most of the time anyway.


As such I have been thinking about trying to tidy this up by preprocessing these transforms into the animation channels, but I am not entirely sure whether that is feasible. Can you even transform things like quaternions and scale vectors (which are stored as vectors instead of matrices in order to facilitate interpolation) by matrices?
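Not directly, as far as I know, but the usual workaround is to compose each keyframe's T/R/S into a matrix, multiply in the static helper transforms, and decompose the result back into translation/rotation/scale for interpolation (which only works cleanly if the result contains no shear). A minimal sketch with a hand-rolled row-major matrix type; Mat4, bakeKey and friends are illustrative names, not Assimp API:

```cpp
#include <array>
#include <cmath>

// Row-major 4x4 matrix with the translation in the last column
// (the same convention as Assimp's aiMatrix4x4).
using Mat4 = std::array<std::array<float, 4>, 4>;

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    return m;
}

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Bake the static FBX helper transforms (pivots, offsets) surrounding a
// keyframe into a single matrix: baked = preStatic * key * postStatic.
// Assuming no shear, this can then be decomposed back into storable
// channels: translation is the last column, per-axis scale is the length
// of each basis column, and the rotation quaternion comes from the
// scale-normalized upper 3x3 (decomposition omitted in this sketch).
Mat4 bakeKey(const Mat4& preStatic, const Mat4& key, const Mat4& postStatic) {
    return mul(mul(preStatic, key), postStatic);
}
```

The catch is that this must be done for every keyframe of every affected channel, and that non-uniform scale combined with rotation in the static transforms can introduce shear that no T/R/S triple can represent.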

If anybody has some pointers on where to start with this kind of undertaking, I would greatly appreciate hearing them.

Per-mesh bone animation with Assimp?

18 January 2015 - 08:04 AM

Apparently the Assimp library stores bone data on a per-mesh basis, where each imported scene (model) contains one or more meshes.

In my importing routine I just ignore any bone that is defined for more than one mesh (i.e. if two meshes refer to a bone with the same name, I read it from the first mesh where it occurs and ignore the other occurrence). Am I shooting myself in the foot here? Are there actually circumstances where two meshes may have different animations for the same bone / joint? It doesn't seem likely to me (never mind the case where two completely separate objects imported from the same scene happen to have bones sharing a name), but then why is it stored like that...?


Mind you, I only just noticed this while going over my model importing code, which I wrote about a year ago, so it might simply be something I'm not remembering. Perhaps the data is stored per mesh because each bone has vertex weighting that differs for each mesh, but it doesn't seem like that is what is stored along with the bone data?


Edit: oh hah, it seems that last thing I wrote was exactly it; rubber ducking for the win, huh? xD

Still, the question of whether animation data may differ on a per-mesh basis remains interesting: can anyone say with 100% certainty that this will never be the case?
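For what it's worth, the realization above (the bone itself is shared, its vertex weights are per-mesh) suggests a merge along these lines when building one skeleton from several meshes. The types are illustrative stand-ins for aiBone data, not Assimp API:

```cpp
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// One vertex weight, as in aiVertexWeight: vertex index plus influence.
struct Weight { unsigned vertexId; float value; };

// One mesh's bone list: bone name -> that mesh's weights for the bone.
using MeshBones = std::vector<std::pair<std::string, std::vector<Weight>>>;

// Shared skeleton entry: the bone is stored once, but its weights are
// kept once per referencing mesh, since vertex indices are mesh-local.
struct SkeletonBone {
    // The offset matrix etc. would live here; weights deliberately don't.
    std::vector<std::vector<Weight>> weightsPerMesh;
};

// Merge bones from several meshes by name. A bone referenced by two
// meshes produces a single skeleton entry with two weight lists.
std::unordered_map<std::string, SkeletonBone>
mergeBones(const std::vector<MeshBones>& meshes) {
    std::unordered_map<std::string, SkeletonBone> skeleton;
    for (const auto& mesh : meshes)
        for (const auto& [name, weights] : mesh)
            skeleton[name].weightsPerMesh.push_back(weights);
    return skeleton;
}
```

This keeps the "first occurrence wins" simplicity for the bone itself while preserving every mesh's weights, which is the part that genuinely differs per mesh.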

What kind of performance to expect from real-time particle sorting?

23 September 2014 - 04:33 PM

This is mostly a theoretical question.

Over the last few days I have implemented a simple GPU-driven particle system for my engine, and after finally tracking down some annoyingly elusive bugs it is running quite well.

However, I realized that sorting the individual particles so that they are drawn back-to-front (for proper alpha blending) takes significant processing time compared to just generating, updating and drawing them.

For example, I see a frame rate drop of roughly 35x when comparing some 8,000 particles with and without depth sorting (~2100 FPS without it vs. ~55 FPS with it, on a relatively high-end GPU).

Is this to be expected due to the way sorting has to be done on the GPU, i.e. that eventually a single thread needs to traverse the whole list to ensure everything is in proper order? Or is there some kind of in-place sorting algorithm that lets separate threads work on separate sub-sections of the particle list without a final step that puts all the pieces together? I have been unable to think of one.
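One family of algorithms with exactly that property is the sorting network, e.g. bitonic sort: every pass consists of n/2 fully independent compare-and-swap operations at a fixed stride, so the thread utilization stays flat across all O(log² n) passes and there is never a single-threaded merge over the whole buffer. A CPU sketch of the network (each iteration of the innermost loop would be one GPU thread per pass):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// In-place bitonic sorting network; n must be a power of two.
// Every (k, j) pair is one pass: all n/2 compare-and-swaps within a pass
// touch disjoint element pairs, so on a GPU each pass is one dispatch of
// independent threads, with only a barrier between passes.
void bitonicSort(std::vector<float>& a) {
    const std::size_t n = a.size();
    for (std::size_t k = 2; k <= n; k <<= 1)          // sorted-run size
        for (std::size_t j = k >> 1; j > 0; j >>= 1)  // compare distance
            for (std::size_t i = 0; i < n; ++i) {
                std::size_t partner = i ^ j;
                if (partner > i) {
                    bool ascending = (i & k) == 0;
                    if (ascending == (a[i] > a[partner]))
                        std::swap(a[i], a[partner]);
                }
            }
}
```

The trade-off versus mergesort is doing O(n log² n) comparisons total, but in practice the uniform work distribution tends to win on the GPU for particle counts like these.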


My sorting algorithm is a pretty straightforward mergesort implementation: I first perform a swap sort to bring sub-lists of two elements into order, then I call a MergeSort shader program that merges two adjacent pre-sorted sub-lists, repeated log2(numParticles) times with the sub-list size starting at 2 and doubling each step. This obviously means I have lots of threads doing little work in the first passes, and finally just one thread comparing all elements in the entire buffer at the end.



Ideally I would of course like to perform the sorting in a quicker way (I suppose the key is better work distribution, but as mentioned I'm having a hard time figuring out how to accomplish that). If that is indeed not possible, I guess I'll have to tinker with sorting only parts of the buffer each frame, though that can of course permit the occasional artifact to slip through.

Geometry shader-generated camera-aligned particles seemingly lacking Z writing

06 September 2014 - 05:10 PM

I find myself with another relatively baffling issue while playing around with billboarded, GPGPU-based particle rendering.

This time I've got a proper render in all respects but one: excessive z-fighting that seems to occur because the order in which the billboards (particles) are drawn varies between frames. I tried to record a video reference of the issue, but for whatever reason Fraps decided to only record a black screen tonight. I can try to get a proper recording up later if needed, but I thought I would post this before bed.


If I disable my alpha testing and go purely with alpha blending, it appears that the completely transparent pixels of closer particles will indeed sometimes overwrite opaque pixels of particles drawn behind them, suggesting that completely transparent pixel writes still fill the depth buffer. The billboard quads aren't particularly close to each other at all, so this cannot be a normal z-fighting problem as far as I can tell.
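For context, in Direct3D 11 depth testing and depth writing are controlled independently through the depth-stencil state, so blended geometry can be made to test against the depth buffer without ever writing to it. A sketch of such a state for the particle pass; this is illustrative of the mechanism, not a confirmed fix for my issue:

```cpp
#include <d3d11.h>

// Depth-test ON, depth-write OFF: blended particles are still occluded by
// opaque geometry drawn earlier, but their own (possibly fully transparent)
// pixels never stamp the depth buffer, so they cannot mask particles that
// happen to be drawn after them.
void bindParticleDepthState(ID3D11Device* device, ID3D11DeviceContext* context)
{
    D3D11_DEPTH_STENCIL_DESC dsDesc = {};
    dsDesc.DepthEnable    = TRUE;
    dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO; // test, don't write
    dsDesc.DepthFunc      = D3D11_COMPARISON_LESS_EQUAL;

    ID3D11DepthStencilState* depthTestNoWrite = nullptr;
    device->CreateDepthStencilState(&dsDesc, &depthTestNoWrite);
    context->OMSetDepthStencilState(depthTestNoWrite, 0);
}
```

Even with writes disabled, blending between the particles themselves is still order-dependent, which is where back-to-front emission would come in.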

Could it be that I'm forgetting some render state I ought to set? Or is this a common "problem" that has to be solved by ensuring that my individual quads are emitted back-to-front from my geometry shader?