remigius

[MDX] Normal mapping clouds, generating a normal map on GPU [SOLVED, shader inside!]


Edit: For the final shader for generating a normal map on the GPU, see my last reply.

Hello, I've been working on a skybox with dynamic clouds, based on Kim Pallister's article. I'm using a shader and RenderToSurface to compose the noise octaves on the GPU, and this technique is very fast. The clouds look nice, but a bit flat:

From some articles here at GameDev I read that clouds are typically shaded using some basic form of raytracing. I get the general idea, but I'm not quite sure how to implement it at all, let alone efficiently. So I decided to go for bump mapping on the clouds to get that shaded look, with the added bonus that the sun's position is taken into account. It turned out like this (sun position set to around 3pm):

I generate the normal map on the CPU, using the cloud density map as a height field, with TextureLoader.ComputeNormalMap. It works, but it's quite slow. If I generate the normal map every frame, my framerate drops to 30, so for now I've settled for recalculating it 4 times per second. That still has quite an impact on the framerate (about -400 fps, disregarding the rendering overhead) and occasionally produces some minor lighting artifacts.

Now, there are three things I had in mind to solve this. First, I could simply forget about the shading, since I'm not 100% convinced it's worth the performance penalty. What do YOU think?

The second approach would be to generate the normal map asynchronously on another thread, leveraging today's common hyperthreading/dual-core CPUs to keep the framerate up. This would be the easiest real solution, but I don't know if it'll give any significant speed gains. Does anyone have experience with this or something similar?

And finally, I was wondering: wouldn't it be possible to calculate the normal map dynamically on the GPU once I've got the octaves composed?
I'm not entirely sure how to code it, but the approach would be to use a shader and RenderToSurface again to sample the 'height values' on the cloud density map, like this: That way I could calculate the surface normals of the adjacent triangles and use those to calculate the normal for S0. Still, it would be a bit tricky to get just right in HLSL, especially the sampling. Does anyone know how the DirectX method TextureLoader.ComputeNormalMap works internally? A code snippet on this would really help out. A complete shader that already does this is also very welcome, of course :)

Well, thanks for bearing with me this long. If you have suggestions for alternatives or any other comments, let's hear 'em.

[Edited by - remigius on December 2, 2005 11:04:00 AM]
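As a rough illustration of what such a height-field-to-normal-map pass computes, here's a minimal CPU-side sketch in plain Python. The function name, the central-difference gradient, and the wrapping at the borders (which suits a tileable cloud texture) are my assumptions, not how D3DX actually implements ComputeNormalMap:

```python
def height_to_normal_map(height, amplitude=1.0):
    """Convert a 2D height field (row-major list of lists, values in [0,1])
    into packed RGB normals. `amplitude` scales how strongly height
    differences tilt the normals. Hypothetical helper, for illustration only.
    """
    rows, cols = len(height), len(height[0])
    out = []
    for y in range(rows):
        row = []
        for x in range(cols):
            # Central differences of the height field; indices wrap around.
            dx = (height[y][(x + 1) % cols] - height[y][x - 1]) * amplitude
            dy = (height[(y + 1) % rows][x] - height[y - 1][x]) * amplitude
            # Surface normal of a height field: (-dh/dx, -dh/dy, 1), normalized.
            nx, ny, nz = -dx, -dy, 1.0
            length = (nx * nx + ny * ny + nz * nz) ** 0.5
            nx, ny, nz = nx / length, ny / length, nz / length
            # Pack each [-1, 1] component into [0, 255] for an 8-bit texture.
            row.append(tuple(int((c + 1.0) * 127.5) for c in (nx, ny, nz)))
        out.append(row)
    return out

# A flat height field maps to the familiar uniform blue of normal maps.
flat = [[0.0] * 4 for _ in range(4)]
print(height_to_normal_map(flat)[0][0])  # → (127, 127, 255)
```

The shader version would do the same per-pixel work in a render-to-texture pass instead of a loop.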

Quote:
Original post by Caitlin
There is an absolutely BEAUTIFUL journal you should view for info about clouds:

Journal of Ysaneya

I know, that journal is absolutely inspirational. The hours I spent gaping at those graphics... But I did see that Ysaneya had a problem similar to mine for his terrain shading (terrain texturing, on page 2) and that he picked a roughly similar (though more advanced) solution. So hopefully he can post some comments, pointers, *shaders*, or whatever :)

Ok, forget about the asynchronous approach. Accessing an online texture from another thread is a sure way to get yourself a good ol' BSOD. Since the textures may or may not be in use by the device for rendering while the normal map is being computed (yay, multithreading), I had to use two offline textures reserved exclusively for the computation to get it anywhere near working. However, the overhead of copying the textures back and forth between the CPU and the device dropped the framerate to 200.

Guess I'm gonna give the shader a try to compute the normal maps on the GPU as well. If anyone cares to comment on the asynchronous approach, or rather suggest how it actually could work efficiently, please feel free to post :)

I've tried implementing a shader that takes a height map and computes a normal map from it. The performance results look promising: I can generate a new normal map every frame on the GPU at about the same framerate I had when generating 4 maps per second on the CPU.

But the normal maps from the shader show some serious artifacts, as in the picture below. The resulting map looks too sharp and has some 'jumpy pixels': artifacts similar to heavy JPEG compression that shift every frame, giving a very interesting, yet unwanted, result.



And some sample heightmap & corresponding normal maps (the CPU normal map is what I hope to achieve, even though it looks a bit dull):




I supply the cloud density map to the shader as a height map and tell it the width of the map, so it can compute dU and dV for the sample offsets, as described in the last picture in the topic start. I use the shader in a RenderToSurface pass to render a pre-transformed quad (the same size as the heightmap, for pixel-perfect sampling) onto the normal map texture. You can find the code for the shader below:

[source lang=hlsl]
float HeightMapSize;

texture HeightMap;
sampler HeightMapSampler = sampler_state
{
    Texture = <HeightMap>;
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Clamp;
    AddressV = Clamp;
};

// application to vertex structure
struct a2v
{
    float4 position : POSITION0;
    float2 tex0 : TEXCOORD0;
};

// vertex to pixel shader structure
struct v2p
{
    float4 position : POSITION0;
    float2 tex0 : TEXCOORD0;
};

// pixel shader to screen
struct p2f
{
    float4 color : COLOR0;
};

void ps( in v2p IN, out p2f OUT )
{
    // One texel's width, used as the offset to the neighboring samples
    float dU = 1 / HeightMapSize;

    // Center sample and its left/up/right/down neighbors
    float s0 = tex2D(HeightMapSampler, IN.tex0).r;
    float s1 = tex2D(HeightMapSampler, float2(IN.tex0.x - dU, IN.tex0.y)).r;
    float s2 = tex2D(HeightMapSampler, float2(IN.tex0.x, IN.tex0.y - dU)).r;
    float s3 = tex2D(HeightMapSampler, float2(IN.tex0.x + dU, IN.tex0.y)).r;
    float s4 = tex2D(HeightMapSampler, float2(IN.tex0.x, IN.tex0.y + dU)).r;

    // Edge vectors from the center to each neighbor
    float3 v1 = float3( -dU, 0, s1 - s0 );
    float3 v2 = float3( 0, -dU, s2 - s0 );
    float3 v3 = float3( dU, 0, s3 - s0 );
    float3 v4 = float3( 0, dU, s4 - s0 );

    // Face normals of the four adjacent triangles
    float3 n1 = normalize(cross( v1, v2 ));
    float3 n2 = normalize(cross( v2, v3 ));
    float3 n3 = normalize(cross( v3, v4 ));
    float3 n4 = normalize(cross( v4, v1 ));

    // Average them and pack [-1, 1] into the [0, 1] color range
    float3 n = normalize(n1 + n2 + n3 + n4);
    OUT.color = float4( (n.x + 1) / 2, (n.y + 1) / 2, (n.z + 1) / 2, 1 );
}

void vs( in a2v IN, out v2p OUT )
{
    OUT.position = IN.position;
    OUT.tex0 = IN.tex0;
}

//--------------------------------------------------------------------------------------
// Techniques
//--------------------------------------------------------------------------------------
technique NormalMapComputation
{
    pass P0
    {
        VertexShader = compile vs_1_1 vs();
        PixelShader = compile ps_2_0 ps();
    }
}
[/source]





At first I thought the problem came from the normal encoding, so I tried various texture formats for the normal map (up to ARGB32f), but that didn't help at all. So I guess there's something wrong with the normal computation in the shader, since the normal maps generated on the CPU do give correct results. If anyone has an idea how to fix this, please let me know; I've been tinkering with the code for a few hours now without any result.

Does anyone see what I'm missing in the shader? Or could the artifacts perhaps be caused by the quad rendering pass? Does anyone have some sample code that generates a normal map from a heightmap (in any language!), so I can check if my code is missing something?

Well, thanks again... *crosses fingers* :)

To answer one of your original questions, TextureLoader.ComputeNormalMap works on the CPU. It basically locks your original texture and computes the normal map from it.

I think your shader is theoretically correct. You've taken the derivative at each pixel (which is just a difference) and used those vectors to compute the final normal value. But as you've said, the normal map may look a bit harsh. Most CPU implementations offer some kind of extra parameter to soften the normal map. I'm not exactly sure how it works, but if you search for details on manually generating normal maps, you'll probably find information on how to do it.
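One plausible way such a softening parameter could work is to scale the height differences before building the normal, so a smaller value pushes every normal toward the neutral 'straight up' blue. A quick numeric sketch in plain Python (the helper name and values are assumptions for illustration):

```python
def packed_normal(dx, dy, amplitude):
    """Pack the normal of a height gradient (dx, dy), scaled by `amplitude`,
    into 0-255 color channels the way the shader above does."""
    nx, ny, nz = -dx * amplitude, -dy * amplitude, 1.0
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return tuple(round((c / length + 1.0) * 127.5) for c in (nx, ny, nz))

# The same gradient at full vs. quarter strength:
print(packed_normal(0.5, 0.0, 1.0))   # strong tilt: red channel far from neutral
print(packed_normal(0.5, 0.0, 0.25))  # softened: much closer to (128, 128, 255)
```

Scaling before normalization matters: scaling the already-normalized vector afterwards only darkens the map, it doesn't straighten the normals.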

neneboricua

Thanks for your reply. I did some more searching, but I can't find anything on how to 'soften' a GPU-generated normal map. I did find some source code at openscenegraph.org which uses the exact same approach as my shader. But I can't help wondering if the shader is 100% correct, as the normal map seems to have an unusual amount of white instead of blue.

Anyway, I went with generating the normal map on the CPU for now, and I've got the lighting of the sky just about to my liking. I'm using a point light to simulate the sun's effect on the clouds and some gradients on the sky dome behind the clouds. Here are some sample shots:


(dawn | noon | sunset | night)

If you want to soften the normal, simply scale it: v * 0.75f, etc...
You could also look into blurring...

I'm already blurring the height map before it is sent to the shader, to prevent hard edges, so that shouldn't be the problem. I tried scaling the normal by various scalars (down to 0.1), but that only makes the normal map look more gray, not more blue as you'd expect in a typical normal map.

I'm really thinking I've got something fundamentally wrong with building the normal map, but I don't see it. The normals are computed correctly, so I guess there's something wrong with the interpretation/encoding of the normals. The encoding looks OK though, since I'm using the exact opposite steps to 'unpack' the normals... I've read a lot about the normal map representing %-left and such, but that's essentially the same as what I'm doing, no?
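For what it's worth, that pack/unpack pair can be sanity-checked numerically. A tiny check in plain Python (hypothetical helper names), mirroring the (n + 1) / 2 packing in the shader:

```python
def pack(n):
    """[-1, 1] normal component -> [0, 1] color channel, as the shader outputs."""
    return (n + 1.0) / 2.0

def unpack(c):
    """[0, 1] color channel -> [-1, 1] normal component, the inverse at lighting time."""
    return c * 2.0 - 1.0

# The round trip is exact (up to 8-bit quantization in a real texture).
for n in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert abs(unpack(pack(n)) - n) < 1e-9
```

So if the map looks white, the suspect is the normal being computed, not the encoding itself.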

I also ran into another problem while testing the normal scaling, namely that the shader above already uses 64 instruction slots. According to the specs, my X850PE should be able to handle 65,280 instructions, but the pixel shader refuses to compile against the ps_2_x target. How on earth can I use the remaining 65,216 instruction slots on the X850 then?!? It will accept compilation to ps_3_0, but then the shaders don't do anything...

Thanks again for any help :)

Here's how I would compute the normal map :

[source lang=hlsl]
float dU = 1 / HeightMapSize;

float s1 = tex2D(HeightMapSampler, float2(IN.tex0.x - dU, IN.tex0.y)).r;
float s2 = tex2D(HeightMapSampler, float2(IN.tex0.x, IN.tex0.y - dU)).r;
float s3 = tex2D(HeightMapSampler, float2(IN.tex0.x + dU, IN.tex0.y)).r;
float s4 = tex2D(HeightMapSampler, float2(IN.tex0.x, IN.tex0.y + dU)).r;

float coef = 1.0f; // change this value to soften / harden the normal map

float3 normal = float3((s1 - s3) * coef, 2.0f, (s2 - s4) * coef);
normal = normalize(normal);
[/source]

I'm too tired to explain in detail, but basically, for the pixel (x, y) you take the vector from (x - 1, y) to (x + 1, y) and the vector from (x, y - 1) to (x, y + 1) and take their cross product. Since both vectors are axis-aligned, this simplifies to the equation I used in the shader.

The coef value corresponds to the "height", so by tweaking it you'll be able to achieve normal maps as smooth as the ones you got from the CPU.

Edit: whoops, forgot to mention: this is what I used for a heightmap, so the normals point upward. For your sky it's the inverse of the heightmap, so you should make the Y value negative (-2.0f instead of 2.0f ^^)
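The simplification above can be checked numerically: the cross product of the two axis-aligned tangent vectors comes out proportional to ((s1 - s3), (s2 - s4), 2·dU), so normalizing either gives the same normal. A quick verification in plain Python (sample heights and texel size are made-up values):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Neighbor heights around a pixel (left, up, right, down) and one texel's width.
s1, s2, s3, s4 = 0.30, 0.55, 0.42, 0.20
dU = 1.0 / 256.0

# Tangent vectors spanning the pixel along x and y...
vx = (2 * dU, 0.0, s3 - s1)
vy = (0.0, 2 * dU, s4 - s2)
n = cross(vx, vy)

# ...whose cross product equals 2*dU times the simplified vector, i.e. the
# same direction after normalization - which is why the shader skips the cross.
simplified = (s1 - s3, s2 - s4, 2 * dU)
assert all(abs(a - b * 2 * dU) < 1e-12 for a, b in zip(n, simplified))
print("cross product matches the simplified formula")
```

Note the simplified vector here keeps 'up' in the z component, matching the first shader's layout; Ysaneya's snippet puts it in y instead, which is just a different channel convention.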
