

Member Since 27 Jan 2014
Offline Last Active Jun 28 2015 03:59 AM

Topics I've Started

Inconsistent falloff for light sources

05 May 2015 - 04:30 AM

I've been trying to implement lighting in my project, essentially following the method that is described here: http://www.3dgep.com/texturing-lighting-directx-11


While this seemingly works, I've noticed some discrepancies: certain meshes appear to receive more light than they should given their distance from the light source(s).

Here's an image showcasing this issue with a spot light:


The light source is situated a small distance above the floor and is aimed straight along the Z axis. As can be seen, the light falls off so that it is essentially non-influential at the point on the floor where it meets the wall, yet the wall itself receives a lot more light than I feel it should at this distance.


Still, one might think this is because the imaginary light rays hit the wall straight on, as opposed to grazing the floor, and would therefore light it up more. However, the same problem is evident with omnidirectional light sources, which in this case should affect all surfaces they hit (including the floor) equally. As the following image shows, this is not the case however.

The dark circle is a semi-transparent sphere indicating the position of the light source, not a shadow of the sphere above it, by the way; no shadow mapping has been implemented yet.





I am wondering what might be the cause of this.

All walls / floor / ceiling are just instances of the same quad mesh, repositioned and rotated using world matrices, so I don't believe this relates to one (read: all) of them having incorrect normal data or such.

Also, when rendering only the calculated attenuation (see the link above), it does indeed fall off as it "should" from the light source, and opposite walls do not receive a greater influence, so the problem shouldn't be related to the attenuation factor either.
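For reference, the attenuation in tutorials of this kind typically follows the constant/linear/quadratic model; here is a minimal CPU-side sketch of it for checking falloff values by hand. The coefficient defaults are made-up placeholders, not the values from the linked article:

```cpp
#include <cassert>

// Distance attenuation in the common constant/linear/quadratic form:
// 1 / (c + l*d + q*d^2). The default coefficients below are placeholders.
float Attenuation(float d, float c = 1.0f, float l = 0.09f, float q = 0.032f) {
    return 1.0f / (c + l * d + q * d * d);
}
```

With c = 1 this returns exactly 1 at the light's position and decreases monotonically with distance, which matches the behaviour described above.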


From what I can gather, the problem does in fact seem to relate to this function, since returning only C from it also produces proper falloff:

/**
 * Calculates the diffuse contribution of a light source at the given pixel.
 * @param L		- Light direction in world space
 * @param N		- Normal direction of the pixel being rendered in world space
 * @param C		- Light colour
 */
float4 CalculateDiffuseContrib(float3 L, float3 N, float4 C) {
	float NL = max(0, dot(N, L));
	return C * NL;
}
Since the only real factor here is the fragment normal (the light direction should be correct; it is calculated as normalize(lightPos - fragmentPos), with both positions in world space), I imagine the problem must somehow relate to that.

I am using normal mapping in the above screenshots, but even when using interpolated vertex normals the results are the same.
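For what it's worth, the N·L term can be sanity-checked on the CPU with a few hand-picked vectors; the Vec3 helpers below are hypothetical stand-ins, not engine code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 Normalize(Vec3 v) {
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Same diffuse factor as the shader above: max(0, dot(N, L)).
float DiffuseFactor(Vec3 N, Vec3 L) { return std::max(0.0f, Dot(N, L)); }
```

A wall facing the light head-on gives a factor of 1, while a floor lit from 45 degrees gives about 0.707, which is the cosine falloff this term is supposed to produce. If either input is not unit length, the factor scales accordingly, which is one common source of "too bright" surfaces.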


Is this a common occurrence, or may it even be that this is how it is supposed to look?

Thanks in advance for any illuminating (wink) replies!

Problems with tangent to world space conversion

28 March 2015 - 09:53 AM

I added normal mapping to my project a while back, which seemed to work as intended, so I didn't think much about it until now that I tried to also add specular mapping. This did not look right at all, and after some debugging it seems that my normals (sampled from a normal map) aren't actually converted into world space properly after all.


So here is how I'm reading the normals:

float3x3 matIT = transpose(float3x3(normalize(IN.tangent), normalize(IN.binormal), normalize(IN.normal)));
float3  normal = NormalMap.Sample(DSampler, IN.texcoord).xyz * 2 - 1; // Convert to the -1 .. +1 range
normal = normalize(normal.x * matIT[0] + normal.y * matIT[1] + normal.z * matIT[2]); // Bring into world space

The IN.normal, IN.tangent and IN.binormal come from the per-vertex input data and are multiplied by the rendered mesh's world matrix prior to being sent to the pixel shader (where the above calculations occur).
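As a sanity check, here is a CPU-side sketch of the tangent-to-world mapping under the usual convention where T, B and N are the rows of the TBN matrix (the V3 type is a hypothetical stand-in). Under this convention no transpose is needed for tangent-to-world; for an orthonormal basis the transposed matrix instead gives the inverse (world-to-tangent) mapping, which may be worth comparing against the code above:

```cpp
#include <cassert>

struct V3 { float x, y, z; };

// worldN = n.x*T + n.y*B + n.z*N, i.e. mul(n, TBN) with T/B/N as rows.
V3 TangentToWorld(V3 n, V3 T, V3 B, V3 N) {
    return { n.x * T.x + n.y * B.x + n.z * N.x,
             n.x * T.y + n.y * B.y + n.z * N.y,
             n.x * T.z + n.y * B.z + n.z * N.z };
}
```

For example, a flat tangent-space normal (0, 0, 1) on an upward-facing surface (N = (0, 1, 0)) should come out as (0, 1, 0) in world space; if it comes out as something else, the row/column convention is likely flipped.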



The odd thing is that this yields proper-looking normal mapping results, but when I try to use these normals for specular highlights, the highlights are off.

Rendering the output normal vector of a spinning cube further suggests something is wrong: after rotation (by mutating its world matrix), the vertices facing upwards have the correct, upwards-facing normal direction for 4 of the faces, but not for all 6 as I can only imagine they should if this were working properly.

The fact that the cube normals do change when it rotates (albeit not always correctly) suggests it isn't a mere oversight of the above code transforming the normal into object space instead of world space either.

It is times like these I regret I didn't take some extra math courses when I had the chance... 

Any help or pointers would be greatly appreciated. :)

Simplifying Assimp 3.1-imported FBX scenes

27 February 2015 - 04:48 AM

I've recently updated to version 3.1 of the Assimp library for importing 3D models into my game engine.

With this came support for importing FBX files, which is nice. However, the FBX importer attempts to retain a lot of separate settings that are present in the file but have no direct correspondence in the aiNode / aiNodeAnim structs. It achieves this by creating a lot of extraneous nodes carrying static transforms for things like pivots and offsets for each of the separate scaling, rotation and translation steps.

Naturally this creates an overly complicated node hierarchy that both uses considerably more memory and is slower to evaluate for animation purposes. The additional transforms won't be changed by my application at all, so I don't need to retain them if they can be folded into a more concise representation. I thought about making these offset matrices part of my Node class, but apparently there can be 10+ of them, so that would be a big waste considering most of them would be set to the identity matrix most of the time anyway.
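One possible approach, sketched below under the assumption that a node is safe to remove when it carries neither an animation channel nor a mesh: fold its static local transform into its children (parent-first) and splice them into the parent. Node and Mat4 here are simplified stand-ins for aiNode / aiMatrix4x4, not Assimp's actual types:

```cpp
#include <array>
#include <cassert>
#include <utility>
#include <vector>

using Mat4 = std::array<float, 16>; // row-major 4x4, translation in column 3

// Row-major 4x4 matrix multiply.
Mat4 Mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

struct Node {
    Mat4 local{};          // stand-in for aiNode::mTransformation
    bool animated = false; // an animation channel targets this node
    bool hasMesh = false;  // the node references at least one mesh
    std::vector<Node> children;
};

// Removes static, mesh-less helper nodes (FBX pivots/offsets) by folding
// each one's local transform into its children before splicing them in.
void Collapse(Node& n) {
    std::vector<Node> kept;
    for (auto& c : n.children) {
        Collapse(c);
        if (!c.animated && !c.hasMesh) {
            for (auto& gc : c.children) {
                gc.local = Mul(c.local, gc.local); // parent-first composition
                kept.push_back(std::move(gc));
            }
        } else {
            kept.push_back(std::move(c));
        }
    }
    n.children = std::move(kept);
}
```

The bottom-up recursion means whole chains of consecutive pivot nodes get squashed into a single accumulated matrix on the surviving child.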


As such, I have been thinking about tidying this up by preprocessing these transforms into the animation channels, but I am not entirely sure whether that is feasible. Can you even transform things like quaternions and scale vectors (which are stored as vectors rather than matrices in order to facilitate interpolation) by matrices?
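To answer the quaternion part at least as a sketch: a rotation key can only be affected by the rotation part of a static transform, so one option is to convert that matrix's rotation to a quaternion and compose the two (this assumes the static transform carries no shear, and uniform scale at most). Minimal quaternion composition, with hypothetical names:

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Hamilton product: applying b first, then a. Assumes unit quaternions.
Quat Mul(const Quat& a, const Quat& b) {
    return { a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
             a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
             a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
             a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w };
}
```

A static pre-rotation would then be baked into every rotation key as Mul(preRotation, key). Translation keys can be transformed by the full matrix, and scale keys only combine cleanly when the static transform's scale is axis-aligned, which is the case that makes this preprocessing tricky in general.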

If anybody has some pointers on where to start with this kind of undertaking, I would greatly appreciate hearing them.

Per-mesh bone animation with Assimp?

18 January 2015 - 08:04 AM

Apparently the Assimp library specifies bone data on a per-mesh basis, where scenes (models) are imported as one or more meshes.

In my importing routine I just ignore any bone data that is defined for more than one mesh (i.e. if two meshes refer to a bone with the same name, I read it from the first mesh where it occurs and ignore the other). Might I be shooting myself in the foot here, in case there are circumstances where two meshes can have different animations for the same bone / joint? It doesn't seem likely to me (never mind the case where two completely separate objects imported from the same scene have bones sharing a name), but then why is it stored like that...?


Going over my model importing code just now (I wrote it about a year ago, so this might simply be something I'm not remembering), I suspect the per-mesh storage exists because each bone has vertex weighting that differs for each mesh, but it doesn't seem that this is stored along with the bone data?


Edit: oh hah, seems that last thing I wrote is exactly it, rubber ducking for the win huh? xD

Even so, the question of whether animation data may differ on a per-mesh basis is still interesting; can anyone say with 100% certainty that this will never be the case?
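If it turns out the per-mesh records really do differ only in their weights, merging them by name is straightforward; here is a sketch with hypothetical engine-side names (these are not Assimp's structs):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Weight { unsigned vertex; float value; };

struct BoneRecord {
    int index = -1; // shared engine-side bone index
    std::vector<std::pair<int, std::vector<Weight>>> perMeshWeights;
};

// Registers a bone seen in some mesh: the first occurrence assigns the
// shared index; later occurrences only contribute their per-mesh weights.
int RegisterBone(std::map<std::string, BoneRecord>& bones,
                 const std::string& name, int meshIndex,
                 std::vector<Weight> weights) {
    auto& rec = bones[name];
    if (rec.index < 0)
        rec.index = static_cast<int>(bones.size()) - 1;
    rec.perMeshWeights.emplace_back(meshIndex, std::move(weights));
    return rec.index;
}
```

This keeps one transform/animation target per bone name while still preserving each mesh's own weight list, which matches the interpretation in the edit above.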

What kind of performance to expect from real-time particle sorting?

23 September 2014 - 04:33 PM

This is mostly a theoretical question.

In recent days I have implemented a simple GPU-driven particle system for use in my engine, and after finally tracking down some annoyingly elusive bugs, it is running quite well.

However, I realized that sorting the individual particles so that they are drawn back-to-front (for proper alpha blending) takes significant processing time compared to just generating, updating and drawing the particles.

For example, I see a frame rate drop by a factor of nearly 40 when running some 8000 particles with depth sorting versus without (~2100 FPS without it and ~55 with it, on a relatively high-end GPU).

Is this to be expected due to the way sorting has to be done on the GPU, i.e. that eventually you need a single thread to traverse the whole list to ensure everything is in proper order? Or is there some kind of in-place sorting algorithm that lets separate threads work on separate sub-sections of the particle list without a final step that puts all the pieces together? I have been unable to think of one.


My sorting algorithm is a pretty straightforward mergesort implementation: I first perform a swap-sort to bring sub-lists of two elements into order, then call a MergeSort shader program that merges two adjacent pre-sorted sub-lists, log2(numParticles) times in total, with the sub-list size starting at 2 and doubling each step. This obviously means I have lots of threads doing little work in the first passes, and finally just one thread comparing all elements in the entire buffer at the end.
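For comparison, bitonic sort avoids the shrinking-parallelism problem entirely: every one of its O(log² n) passes consists of n/2 independent compare-exchanges, so each pass can run as one fully occupied dispatch. A CPU sketch of the network (sorting floats ascending, with n assumed to be a power of two):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

void BitonicSort(std::vector<float>& a) {
    const std::size_t n = a.size(); // assumed to be a power of two
    for (std::size_t k = 2; k <= n; k <<= 1) {         // bitonic run size
        for (std::size_t j = k >> 1; j > 0; j >>= 1) { // compare distance
            // Every iteration of this inner loop body is independent of the
            // others, so on the GPU it maps to a single dispatch in which
            // n/2 threads each perform one compare-exchange.
            for (std::size_t i = 0; i < n; ++i) {
                std::size_t l = i ^ j;
                if (l > i) {
                    const bool ascending = ((i & k) == 0);
                    if ((ascending && a[i] > a[l]) ||
                        (!ascending && a[i] < a[l]))
                        std::swap(a[i], a[l]);
                }
            }
        }
    }
}
```

The trade-off versus mergesort is more total comparisons, but uniform work per pass; in practice particle systems usually sort a compact key/index buffer (depth key plus particle index) rather than the full particle structs.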



Ideally I would of course want to perform the sorting more quickly (I suppose the key to this is better work distribution, but as said, I'm having a hard time figuring out how to accomplish that). If that is indeed not possible, I guess I'll have to tinker with sorting only parts of the buffer each frame, but that can of course permit the occasional artifact to slip through.