DX11 Question about nDotL across LOD levels


Recommended Posts

I know I've been spamming the forums a bit, but please bear with me.

 

I have this old DX9 CPU-based terrain LOD system and I'm updating it to a modern DX11 GPU-based one. Progress has been slow but very fun, since I get to replace hundreds of lines of expensive CPU LOD code with a few GPU lines, and I also get a performance boost out of it.

 

I am not going to talk about terrain height across LODs and morphing, because those questions have fairly solid answers in my head.

 

It is about lighting. Even with physically based rendering, the good old cosine factor, nDotL, plays a huge role since we multiply by it. So I want to get it right, but I am getting some weird results as I move across LOD/MIP levels in the normal map generated from my height map.

 

Supposing that we have a sampling rate of 1, meaning that for NxN vertices we have NxN texels in our height map, and a light direction of float3(0.80, -0.40, 0.0), I have a first question.

 

1. Is the following nDotL output correct and useful for physically based rendering, and should it be used as a general guideline across all LOD levels, meaning that whatever the sampling rate, the nDotL should roughly follow this curve?

 

[attachment=28526:ndotl1.png]

 

This looks correct for a low resolution (128x128) input texture.

 

But if I double the height map and normal map resolution and halve the sample rate so that the number of vertices remains the same, I get this result, which changes the nDotL curve:

 

[attachment=28527:ndotl2.png]

 

This is because I am creating the normal map by sampling 4 points in the height map:

float4 GenNM(VertexShaderOutput input) : SV_TARGET {
	// One texel step in UV space (size = height map resolution).
	float ps = 1 / size;
	//ps *= size / 128;

	// Central differences of the height field, scaled by the world height scale (30).
	float3 n;
	n.x = permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x - ps, input.texCoord.y, 0, 0), 0).x * 30 - 
			permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x + ps, input.texCoord.y, 0, 0), 0).x * 30;
	n.z = -(permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x, input.texCoord.y - ps, 0, 0), 0).x * 30 - 
			permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x, input.texCoord.y + ps, 0, 0), 0).x * 30);
	n.y = 2;	// fixed up component; larger values flatten the normal
	n = normalize(n);
	n = n * 0.5 + 0.5;	// pack from [-1, 1] into [0, 1] for storage

	return float4(n, 1);
}

permTexture2d is the (badly named) height map texture, and 30 is the world scale. When I double the height map resolution, the texels get finer and the sampled height points move closer together, so the height differences between neighboring samples shrink and some of the overall curvature of the lower resolution height map is lost, rather than detail being added to it. Am I understanding this right?
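
If that is the case, a minimal sketch of a fix would be to measure the slope as rise over run, dividing the height difference by the world-space distance between the two samples instead of relying on the fixed n.y = 2; texelWorldSpacing and heightScale below are assumed constants, not something from the shader above:

float3 HeightToNormalScaled(float2 uv, float ps, float texelWorldSpacing, float heightScale)
{
	// Same four taps as GenNM, with the heights converted to world units.
	float hL = permTexture2d.SampleLevel(permSampler2d, float4(uv.x - ps, uv.y, 0, 0), 0).x * heightScale;
	float hR = permTexture2d.SampleLevel(permSampler2d, float4(uv.x + ps, uv.y, 0, 0), 0).x * heightScale;
	float hD = permTexture2d.SampleLevel(permSampler2d, float4(uv.x, uv.y - ps, 0, 0), 0).x * heightScale;
	float hU = permTexture2d.SampleLevel(permSampler2d, float4(uv.x, uv.y + ps, 0, 0), 0).x * heightScale;

	// Rise over run: the two taps are 2 texels apart in world space, so doubling the
	// resolution halves both the rise and the run and the measured slope stays comparable.
	float3 n;
	n.x = (hL - hR) / (2 * texelWorldSpacing);
	n.z = (hU - hD) / (2 * texelWorldSpacing);
	n.y = 1;
	return normalize(n);
}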

 

Doubling the height map and normal map resolution again and halving the sample rate so that we have the same number of vertices, I get this result:

 

[attachment=28528:ndotl3.png]

 

By the time I get to the desired resolution, the normals become quite flat and so does the lighting.

 

The reason for generating a high resolution map is that it is the input for LOD 0, the terrain closest to the camera. LOD 0 is rendered using a lot of vertices. As I move away from the camera, I start using fewer vertices, spaced further apart. The final LOD will have 128x128 vertices, like in the first low resolution screenshot.

 

So the next question is:

 

2. How should the normal map look at full resolution? More like the one in the first screenshot, but with more detail, or flat, like in the last screenshot?

 

3. How consistent should the various MIP levels of the normal map be as I go down in resolution?

 


(a) Is the following output of nDotL correct, useful for physically based rendering

and must be used as a general guideline across all LOD levels, meaning that

(b) whatever the sampling rate, your nDotL should roughly follow this curve?

(a) n dot l is a convincing approximation; as far as I remember it has no physical basis, it only looks arguably convincing.

 

(b) I have no idea what curve you're talking about. What I see seems to be a normal map... in world space I assume. It seems convincing. Yes, in theory you should try to stick close to it.

 

 

 

But if I double the height map and normal map resolution and halve the sample rate so that the number of vertices remains the same, I get this result, which changes the nDotL curve:

Of course it does. You're sampling different points, so you get different results. When it comes to terrains, you don't sample them at some random interval you decide: you sample them at native resolution, stepping across adjacent samples. If you have some interpolation method you might think about super-sampling, but that's backwards. The heightmaps from which you pull normals must be the highest resolution you have; eventually bake them into a normal map.

 

 

How should the normal map look at full resolution? More like the one in the first screenshot, but with more detail, or flat, like in the last screenshot?

You are not going to figure it out with a test set like the one you're using. Pull in a special test case; the correct result should be pretty trivial to identify. Besides, artistic decisions might apply.

 

 

How consistent should the various MIP levels of the normal map be as I go down in resolution?

Good question. In my experience you can get quite some variation if you use at least trilinear filtering. What I did, however, was to re-compute all normals from the mipmapped heightmap; mipmapping the normals themselves does not sound so convincing to me. Note, however, that I did those tests for generic bump mapping. I'm following the discussion.
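
A minimal sketch of what re-computing per mip could look like, assuming it reuses the full-screen normal-generation pass from earlier in the thread and that mipLevel and mipSize are fed in per pass (they are not part of the original shader): render once per mip level, sampling the heightmap at that same mip, so each normal mip comes from already-filtered heights instead of averaged normals.

// Sketch only: regenerate the normals of one mip from the matching heightmap mip.
// mipLevel and mipSize are assumed per-pass constants, not from the original code.
float4 GenNormalForMip(VertexShaderOutput input) : SV_TARGET
{
	float ps = 1 / mipSize;	// texel step at this mip
	float3 n;
	n.x = permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x - ps, input.texCoord.y, 0, 0), mipLevel).x * 30 -
			permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x + ps, input.texCoord.y, 0, 0), mipLevel).x * 30;
	n.z = -(permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x, input.texCoord.y - ps, 0, 0), mipLevel).x * 30 -
			permTexture2d.SampleLevel(permSampler2d, float4(input.texCoord.x, input.texCoord.y + ps, 0, 0), mipLevel).x * 30);
	n.y = 2;
	n = normalize(n);
	return float4(n * 0.5 + 0.5, 1);
}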


Thanks for the input Krohm!

 

 

(b) I have no idea what curve you're talking about. What I see seems to be a normal map... in world space I assume. It seems convincing. Yes, in theory you should try to stick close to it.

 

Of course it does. You're sampling different points, so you get different results. When it comes to terrains, you don't sample them at some random interval you decide: you sample them at native resolution, stepping across adjacent samples. If you have some interpolation method you might think about super-sampling, but that's backwards. The heightmaps from which you pull normals must be the highest resolution you have; eventually bake them into a normal map.

 

I'm talking about the general shape of the curve along which nDotL goes from 1 to 0, and how it depends on the sampling rate.

 

I too believe that in theory you should use your maximum resolution/LOD 0 height map to get the normal map. But I do not like the visual result I get when I build it at LOD 0. Maybe I am building it wrong! The results get worse and worse as I increase the resolution. Here are the results for 4096x4096:

 

[attachment=28529:nn01.png]

 

Maybe it is correct, but I do not think so.

 

If I go to LOD 2 (4 times lower resolution), I start to get some normals:

 

[attachment=28530:nn02.png]

 

Going to LOD 4, the second lowest in quality, I get this result:

 

[attachment=28531:nn03.png]

 

I decided to try some things out. Here is a normal LOD 5 shot, the lowest quality, with regular normals:

 

[attachment=28532:nn04.png]

 

And here is a shot that uses a high resolution normal map corresponding to LOD 0, only with some physically very unsound blending of normals:

 

[attachment=28533:nn05.png]

 

I need to try some more physically sound blending.
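
One commonly used, somewhat more principled option is "whiteout"-style blending; a minimal sketch, assuming both normals are already unpacked to [-1, 1] and that y is up as in the shaders above (this is a generic technique, not something from this thread):

// "Whiteout"-style normal blending sketch. base and detail are unpacked to [-1, 1]; y is up.
float3 BlendNormalsWhiteout(float3 base, float3 detail)
{
	// Add the horizontal components, multiply the up components, then renormalize.
	return normalize(float3(base.x + detail.x, base.y * detail.y, base.z + detail.z));
}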

 

I have no idea yet which direction to follow. More like the first screenshot or more like the last or next to last? Artistically I like the last ones.

 

 

 

You are not going to figure it out with a test set like the one you're using. Pull in a special test case; the correct result should be pretty trivial to identify. Besides, artistic decisions might apply.

 

What kind of special test case do you have in mind?


(a) n dot l is a convincing approximation; as far as I remember it has no physical basis, it only looks arguably convincing.

It is not only an approximation. n dot l is basically the cosine of the angle between the surface normal and the light direction, and it is the factor by which the area gets smaller when projected onto the plane perpendicular to the light ray. Here is a quick sketch:
[attachment=28534:ndotl.png]

where A' = (n dot l) * A (A and A' denote the area before and after projection).

So it basically takes into account how the light rays are spread out over a larger area when they hit the surface at a greater angle.
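
As a concrete illustration (a minimal sketch, not code from this thread), this cosine/projected-area factor is exactly what the usual clamped dot product in a lighting shader computes:

// Minimal Lambert term sketch. N is the surface normal, L points from the surface toward
// the light; both are assumed normalized. saturate() clamps back-facing results to zero.
float3 LambertDiffuse(float3 albedo, float3 lightColor, float3 N, float3 L)
{
	float nDotL = saturate(dot(N, L));	// cosine of the angle between normal and light
	return albedo * lightColor * nDotL;	// incoming light spread over a larger area at grazing angles
}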

 

I'm not really sure which curve you are talking about, but have you verified that the normal vector is correct and normalized? (Even if it's normalized in the vertex shader, that does not mean it will be normalized in the pixel shader, due to interpolation.)

Edited by Simmie


So I tried two blending methods for the highest detail LOD based loosely on more physically sound operations, and got these two attached results.

 

Man, rendering...

 

I'll go with 5 from my previous reply for the stylized look and with 7 from this reply for the "realistic" look, at least until I can shed more light on the problem.


There's a good point that was raised but didn't get attention: You need to normalize the normals.

 

Sampling a normal from a bilinear or trilinear fetch won't result in a normalized normal, even if the pixels themselves are normalized.

If you sample right in the middle between (-0.70711, 0.70711) and (0.70711, 0.70711), the interpolated normal will be (0, 0.70711), which is not normalized. The correct, normalized result is (0, 1).
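
A minimal sketch of the fix (the texture and sampler names below are placeholders, not from the thread): unpack and renormalize after every filtered fetch.

// normalMapTexture and linearSampler are placeholder names for this sketch.
float3 SampleNormal(Texture2D normalMapTexture, SamplerState linearSampler, float2 uv)
{
	float3 n = normalMapTexture.Sample(linearSampler, uv).xyz * 2 - 1;	// unpack from [0, 1]
	return normalize(n);	// bilinear/trilinear filtering shortens the vector, so renormalize
}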

 

As a side note, it looks like you're not doing gamma-correct rendering. Try rendering to sRGB render targets.



I'm not really sure which curve you are talking about, but have you verified that the normal vector is correct and normalized? (Even if it's normalized in the vertex shader, that does not mean it will be normalized in the pixel shader, due to interpolation.)

 

There's a good point that was raised but didn't get attention: You need to normalize the normals.

 

Sampling a normal from a bilinear or trilinear fetch won't result in a normalized normal, even if the pixels themselves are normalized.

If you sample right in the middle between (-0.70711, 0.70711) and (0.70711, 0.70711), the interpolated normal will be (0, 0.70711), which is not normalized. The correct, normalized result is (0, 1).

 

As a side note, it looks like you're not doing gamma-correct rendering. Try rendering to sRGB render targets.

 

The normals are normalized, in the PS too.

 

And I am using gamma-correct rendering. The debug maps you see on the left are all run through a pixel shader. The height map is made more human readable by coloring terrain above water gray and terrain below it blue, and the normal map is rendered in a "fake non-sRGB" mode.

 

Universally applied gamma-correct rendering is fairly new; in the past people would just output their linear normals from the shaders to the screen, and I have gotten used to the look of normal maps displayed wrong like that. So I wrote a little pixel shader to fake that look on a gamma-corrected renderer.


Your normal calculation is the problem. You don't take into account that when you are using a smaller texture size, there is a bigger variance between neighboring texels. You need to take this into account.

 

Just think about a 45 degree slope. With a 256x256 texture each texel differs from its neighbor by only 1/255, but with a 2x2 texture the difference would be 255/255.


Yes, it was the normal calculation. I inherited the code from the CPU version, so I guess that one was bad too. I eventually settled on a Sobel operator:

float4 PSNHeightToNormal(float4 inPos : SV_POSITION, 
                         float2 inTex : TEXCOORD0) : SV_TARGET {
	// One texel step in UV space; h(uv, dx, dy) samples the height map at that offset.
	float ps = 1 / size;

	float3 n;
	float scale = worldScale;

	// Sobel filter: weighted differences across the 3x3 height neighborhood.
	n.x = -(h(inTex,  ps,  ps) - h(inTex, -ps,  ps) + 2 * (h(inTex, ps, 0)  - h(inTex, -ps, 0))  + h(inTex, ps, -ps) - h(inTex, -ps, -ps));
	n.y = -(h(inTex, -ps, -ps) - h(inTex, -ps,  ps) + 2 * (h(inTex, 0, -ps) - h(inTex,  0,  ps)) + h(inTex, ps, -ps) - h(inTex,  ps,  ps));
	n.z = 1 / scale;	// up component, inversely proportional to the world height scale

	n = normalize(n);
	
	n = n * 0.5 + 0.5;	// pack into [0, 1] for storage

	// Swizzle so the stored map matches the engine's y-up convention.
	return float4(n.x, n.z, n.y, 1);
}
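
Here h() is just a small helper that samples the height map at a UV offset, roughly like this (the exact texture and sampler names may differ):

// Assumed helper, not shown above: sample the raw height at a UV offset.
float h(float2 uv, float dx, float dy)
{
	return permTexture2d.SampleLevel(permSampler2d, float4(uv.x + dx, uv.y + dy, 0, 0), 0).x;
}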

Hopefully this one works as expected. I still need to test it a bit because I am having a bit of a brain fart since the switch from RH to LH. I no longer have an intuitive concept of "forward" and I need to continuously convert in my head from one system to the other :).

 

There are still a bunch of things to decide regarding how to interpret the data for LOD transitions and whether to use mipmaps or just secondary lower resolution textures.

 

Is there a way to control how DeviceContext->GenerateMips works, i.e. what filter it uses? I couldn't find anything.

 

Additionally, since for LOD I am generating every chunk separately using noise, I have reintroduced the issue of T-seams at chunk borders...
