# [MDX] Normal mapping clouds, generating a normal map on GPU [SOLVED, shader inside!]


## Recommended Posts

##### Share on other sites
There is an absolutely BEAUTIFUL journal you should view for info about clouds:

Journal of Ysaneya

##### Share on other sites
Quote:
> Original post by Caitlin
> There is an absolutely BEAUTIFUL journal you should view for info about clouds: Journal of Ysaneya

I know, that journal is absolutely inspirational. The hours I spent gaping at those graphics... But I did see that Ysaneya had a problem similar to mine for his terrain shading (terrain texturing, on page 2) and that he picked a roughly similar (though more advanced) solution. So hopefully he can post some comments, pointers, *shaders*, or whatever :)

##### Share on other sites
Ok, forget about the asynchronous approach. Accessing an online texture from another thread is a sure way to get yourself a good ol' BSOD. Since the textures may or may not be in use on the device for rendering while the normal map is being computed (yay, multithreading), I had to use two offline textures reserved exclusively for the computation to get anything near working. However, the overhead of copying the textures back and forth between the CPU and the device dropped the framerate to 200.

Guess I'm gonna give the shader a try to compute the normal maps on the GPU as well. If anyone cares to comment on the asynchronous approach, or rather suggest how it actually could work efficiently, please feel free to post :)

##### Share on other sites
I've tried implementing a shader that takes a height map and computes a normal map from it. The performance results look promising: I can generate a new normal map every frame on the GPU at about the same framerate I had when generating 4 maps per second on the CPU.

But the normal maps from the shader show some serious artifacts, as shown in the picture below. The resulting map looks too sharp and has some 'jumpy pixels': artifacts similar to harsh JPEG compression that shift every frame, which gives very interesting, yet unwanted, results.

And some sample heightmap & corresponding normal maps (the CPU normal map is what I hope to achieve, even though it looks a bit dull):

I'm supplying the cloud density map as a height map to the shader and telling it the width of the map, so it can compute dU and dV for obtaining the sample points, as described in the last picture in the topic start. I use the shader in a RenderToSurface pass to render a pre-transformed quad (the same size as the heightmap, for pixel-perfect sampling) onto the normal map texture. You can find the code for the shader below:


```hlsl
float HeightMapSize;

texture HeightMap;
sampler HeightMapSampler = sampler_state
{
    Texture = <HeightMap>;
    MinFilter = Linear;
    MagFilter = Linear;
    AddressU = Clamp;
    AddressV = Clamp;
};

// application to vertex structure
struct a2v
{
    float4 position : POSITION0;
    float2 tex0     : TEXCOORD0;
};

// vertex to pixel shader structure
struct v2p
{
    float4 position : POSITION0;
    float2 tex0     : TEXCOORD0;
};

// pixel shader to screen
struct p2f
{
    float4 color : COLOR0;
};

void ps(in v2p IN, out p2f OUT)
{
    float dU = 1 / HeightMapSize;

    float s0 = tex2D(HeightMapSampler, IN.tex0).r;
    float s1 = tex2D(HeightMapSampler, float2(IN.tex0.x - dU, IN.tex0.y)).r;
    float s2 = tex2D(HeightMapSampler, float2(IN.tex0.x, IN.tex0.y - dU)).r;
    float s3 = tex2D(HeightMapSampler, float2(IN.tex0.x + dU, IN.tex0.y)).r;
    float s4 = tex2D(HeightMapSampler, float2(IN.tex0.x, IN.tex0.y + dU)).r;

    float3 v1 = float3(-dU, 0, s1 - s0);
    float3 v2 = float3(0, -dU, s2 - s0);
    float3 v3 = float3(dU, 0, s3 - s0);
    float3 v4 = float3(0, dU, s4 - s0);

    float3 n1 = normalize(cross(v1, v2));
    float3 n2 = normalize(cross(v2, v3));
    float3 n3 = normalize(cross(v3, v4));
    float3 n4 = normalize(cross(v4, v1));

    float3 n = normalize(n1 + n2 + n3 + n4);

    OUT.color = float4((n.x + 1) / 2, (n.y + 1) / 2, (n.z + 1) / 2, 1);
}

void vs(in a2v IN, out v2p OUT)
{
    OUT.position = IN.position;
    OUT.tex0 = IN.tex0;
}

//--------------------------------------------------------------------------------------
// Techniques
//--------------------------------------------------------------------------------------
technique NormalMapComputation
{
    pass P0
    {
        VertexShader = compile vs_1_1 vs();
        PixelShader  = compile ps_2_0 ps();
    }
}
```

At first I thought the problem was coming from the normal encoding, so I tried various texture formats for the normal map (up to ARGB32f), but that didn't help at all. So I guess there's something wrong with the normal computation in the shader, as the normal maps generated on the CPU do give correct results. If someone has any idea how to fix this, please let me know, because I've been tinkering with the code for a few hours now without any result.

Does anyone see what I'm missing in the shader? Or could the artifacts perhaps be caused by the quad rendering pass? Doesn't anyone have some sample code to generate a normal map from a heightmap (in any language!), so I can check if my code is missing something?
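Since any language is fine: here is a hedged CPU sketch of the same four-cross-product scheme the shader uses, in Python with NumPy. The function name and array layout are my own; the shader above is the authoritative version. It can serve as a reference to diff the GPU output against.

```python
import numpy as np

def normal_map_from_heightmap(height):
    """CPU reference mirroring the shader: sample the four axis-aligned
    neighbours of each texel (with clamp addressing), build four edge
    vectors, average the four cross products, then pack into [0, 1]."""
    height = np.asarray(height, dtype=np.float64)
    h, w = height.shape
    du = 1.0 / w  # texel step, as in the shader

    # Clamp-addressed neighbour samples: left, up, right, down.
    s0 = height
    s1 = np.pad(height, ((0, 0), (1, 0)), mode='edge')[:, :-1]  # x - 1
    s2 = np.pad(height, ((1, 0), (0, 0)), mode='edge')[:-1, :]  # y - 1
    s3 = np.pad(height, ((0, 0), (0, 1)), mode='edge')[:, 1:]   # x + 1
    s4 = np.pad(height, ((0, 1), (0, 0)), mode='edge')[1:, :]   # y + 1

    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    zeros = np.zeros_like(s0)
    v1 = np.stack([np.full_like(s0, -du), zeros, s1 - s0], axis=-1)
    v2 = np.stack([zeros, np.full_like(s0, -du), s2 - s0], axis=-1)
    v3 = np.stack([np.full_like(s0, du), zeros, s3 - s0], axis=-1)
    v4 = np.stack([zeros, np.full_like(s0, du), s4 - s0], axis=-1)

    # Average of the four face normals, exactly as in the pixel shader.
    n = (norm(np.cross(v1, v2)) + norm(np.cross(v2, v3)) +
         norm(np.cross(v3, v4)) + norm(np.cross(v4, v1)))
    return (norm(n) + 1.0) / 2.0  # pack [-1, 1] into [0, 1]
```

A flat heightmap should come out as uniform (0.5, 0.5, 1.0), the familiar normal-map blue; anything else means a bug in one of the two implementations.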

Well, thanks again... *crosses fingers* :)

##### Share on other sites
To answer one of your original questions, TextureLoader.ComputeNormalMap works on the CPU. It will basically lock your original texture and compute the normal map.

I think your shader is theoretically correct. You've taken the derivative at each pixel (which is just a difference) and used those vectors to compute the final normal value. But as you've said, the normal map may look a bit harsh. Most CPU implementations offer some kind of extra parameter to soften the normal map a bit. I'm not exactly sure how it works but if you search for details on how to manually generate normal maps, you will probably find some info on how to do it.
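I can't speak for any particular tool's parameter, but one common way to expose that kind of softening knob is to scale the height differences before normalizing, so the heightmap is treated as flatter. A minimal central-difference sketch in Python/NumPy (function and parameter names are mine; z-up, matching the shader in this thread):

```python
import numpy as np

def soft_normal(hl, hr, hu, hd, strength=0.25):
    """Central-difference normal with a softness control: hl/hr/hu/hd are
    the left/right/up/down neighbour heights. Smaller 'strength' treats
    the surface as flatter, pulling normals toward straight-up (0, 0, 1)."""
    n = np.array([(hl - hr) * strength, (hu - hd) * strength, 1.0])
    return n / np.linalg.norm(n)
```

With `strength=1.0` a unit height step tilts the normal 45 degrees; at `strength=0.1` the same step barely registers, which is the "softened" look.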

neneboricua

##### Share on other sites
Thanks for your reply. I did some more searching, but I can't find anything on how to 'soften' a GPU-generated normal map. I did find some source code on openscenegraph.org that uses the exact same approach as my shader. But I can't help wondering if the shader is 100% correct, as the normal map seems to have an unusual amount of white instead of blue.

Anyway, I went with generating the normal map on the CPU for now and I got the lighting of the sky just about to my liking. I'm using a point light to simulate the sun's effect on the clouds and some gradients on the sky dome behind the clouds. Here are some sample shots:

(dawn | noon | sunset | night)

##### Share on other sites
If you want to soften the normal, simply scale the normal. v * 0.75f, etc...
you could also look into blurring...

##### Share on other sites
I'm already blurring the height map before it is sent to the shader, to prevent hard edges, so that shouldn't be the problem. I tried scaling the normal by various factors (down to 0.1), but that only makes the normal map look more gray, not more blue as you'd expect in a typical normal map.

I really think I've got something fundamentally wrong in building the normal map, but I don't see it. The normals are computed correctly, so I guess something is wrong with the interpretation/encoding of the normals. The encoding looks ok though, since I'm using the exact opposite steps to 'unpack' the normals... I've read a lot about the normal map representing %-left and such, but that's essentially the same as what I'm doing, no?
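For what it's worth, the usual pack/unpack convention can be sanity-checked in isolation (plain Python sketch, function names are mine). A straight-up normal (0, 0, 1) packs to (0.5, 0.5, 1.0), the familiar light blue, so if the encoding round-trips but the map still looks white, the suspicion shifts to the normals themselves having large x/y components everywhere rather than to the encoding:

```python
def pack_normal(n):
    """Map a unit normal with components in [-1, 1] to color channels in [0, 1]."""
    return tuple((c + 1.0) / 2.0 for c in n)

def unpack_normal(rgb):
    """Exact inverse: color channels in [0, 1] back to [-1, 1]."""
    return tuple(c * 2.0 - 1.0 for c in rgb)

# A straight-up normal packs to the familiar normal-map blue.
print(pack_normal((0.0, 0.0, 1.0)))  # (0.5, 0.5, 1.0)
```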

I also ran into another problem while testing the normal scaling: the shader above already uses 64 instruction slots. According to the specs, my X850PE should be able to handle 65,280 instructions, but the pixel shaders refuse to compile against the ps_2_x target. How on earth can I use the remaining 65,216 instruction slots on the X850 then?!? It will accept compilation to ps_3_0, but then the shaders don't do anything...

Thanks again for any help :)

##### Share on other sites
Here's how I would compute the normal map :

```hlsl
float dU = 1 / HeightMapSize;

float s1 = tex2D(HeightMapSampler, float2(IN.tex0.x - dU, IN.tex0.y)).r;
float s2 = tex2D(HeightMapSampler, float2(IN.tex0.x, IN.tex0.y - dU)).r;
float s3 = tex2D(HeightMapSampler, float2(IN.tex0.x + dU, IN.tex0.y)).r;
float s4 = tex2D(HeightMapSampler, float2(IN.tex0.x, IN.tex0.y + dU)).r;

float  coef   = 1.0f;    // change this value to soften / harden the normal map
float3 normal = float3((s1 - s3) * coef, 2.0f, (s2 - s4) * coef);
normal = normalize(normal);
```

I'm too tired to explain in detail, but basically, for the pixel (x, y) you take the vector ((x - 1, y)->(x + 1, y)) and the vector ((x, y - 1)->(x, y + 1)) and take their cross product. Since both vectors are aligned with the axes, it simplifies to the equation I used in the shader.

The coef value corresponds to the "height": by tweaking it, you'll be able to achieve normal maps as smooth as the one you got from the CPU.

Edit: whoops, forgot to mention: this is what I used for a heightmap, so the normals point upward. For your sky it's the inverse of a heightmap, so you should make the Y value negative (-2.0f instead of 2.0f ^^)
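For reference, the closed form above (with the negated Y from the edit) can be sketched in plain Python/NumPy; s1..s4 are the four neighbour samples, named as in the shader, and the function name is my own:

```python
import numpy as np

def cloud_normal(s1, s2, s3, s4, coef=1.0):
    """Simplified central-difference normal: the cross product of the two
    axis-aligned tangent vectors reduces to this closed form. Y is negative
    (-2.0 instead of 2.0) because for the sky the field is an inverse
    heightmap, so the normals should point downward toward the viewer."""
    n = np.array([(s1 - s3) * coef, -2.0, (s2 - s4) * coef])
    return n / np.linalg.norm(n)
```

With equal neighbour samples this yields the straight-down normal (0, -1, 0), and larger `coef` tilts it further for the same height differences.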
