
GPU normal vector generation from heightmaps for non-horizontal terrain


Hyunkel    401
I'm working on my planetary renderer, which currently calculates vertex positions, displacement, and normal vectors on the CPU.
Given the scale I'm working at, this is quite slow (about 250 ms per update on an i7-2600K for high LOD levels).

I want to attempt to gradually move calculations over to the GPU, but I'm a bit puzzled as to how to generate my normal vectors if I do so.
On the GPU I'll only have access to the current non-displaced vertex position and a heightmap for the current terrain patch.
I have generated normal vectors on the GPU before using this technique: http://www.catalinzima.com/2008/01/converting-displacement-maps-into-normal-maps/
But unfortunately this technique and similar ones assume that the terrain is horizontal, which mine is not.
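
To show where the horizontal assumption comes in: filters like that build the normal as a perturbation of a fixed up axis. Roughly something like this (just a sketch, with placeholder names for the resources and constants):

[code]
// Central-difference normal from a heightmap, assuming the terrain plane is XZ
// and "up" is +Y; this is the assumption that breaks on a curved planet patch.
Texture2D<float> HeightMap   : register(t0);
SamplerState     LinearClamp : register(s0);

float2 TexelSize;   // 1.0 / heightmap resolution
float  HeightScale; // world-space height per heightmap unit
float  GridSpacing; // world-space distance between adjacent samples

float3 HeightmapNormal(float2 uv)
{
    float hL = HeightMap.SampleLevel(LinearClamp, uv - float2(TexelSize.x, 0), 0);
    float hR = HeightMap.SampleLevel(LinearClamp, uv + float2(TexelSize.x, 0), 0);
    float hD = HeightMap.SampleLevel(LinearClamp, uv - float2(0, TexelSize.y), 0);
    float hU = HeightMap.SampleLevel(LinearClamp, uv + float2(0, TexelSize.y), 0);

    // Slope along X and Z, then a normal that is a perturbation of +Y.
    float dx = (hR - hL) * HeightScale / (2.0 * GridSpacing);
    float dz = (hU - hD) * HeightScale / (2.0 * GridSpacing);
    return normalize(float3(-dx, 1.0, -dz));
}
[/code]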

I thought about using a rotation matrix per patch to transform the generated normal vectors, but I can only see that working at very high levels of detail, where the (non-displaced) terrain is nearly flat.

Any ideas how I could do this?

Cheers,
Hyu

Krohm    5030
What's the problem with always doing it in "terrain space" (similar to tangent space) and then transforming?
I've been doing that in the past and I don't recall having problems with it.
The normals are a function of the heightmap, so the filter still works; it just delivers its data in "terrain space".

Hyunkel    401
My problem is that I do not know how to transform the generated normals into world space.
Since I'm generating planetary terrain, none of my terrain patches are flat, so a per-patch transform matrix would only be valid for the center vertex.
The error might be low enough at very high LODs to be unnoticeable, but it wouldn't work for lower LODs, where the patches are strongly curved.

I guess that I could try to build an individual rotation matrix for each vertex in the vertex shader using the vertex position, but I haven't tried that yet.
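
Roughly, I'm imagining something like this (untested sketch; how the tangent is picked here is completely arbitrary, and it would have to match whatever frame the heightmap filter assumed):

[code]
// Untested idea: build a per-vertex frame from the non-displaced sphere
// direction and rotate a tangent-space heightmap normal into world space.
// The reference axis used to derive the tangent is an arbitrary choice.
float3 TangentToWorld(float3 sphereDir, float3 tangentNormal)
{
    // sphereDir = normalize(nonDisplacedPosition - planetCenter)
    float3 up = sphereDir;

    // Pick any reference axis that isn't parallel to 'up' to derive a tangent.
    float3 reference = abs(up.y) < 0.99 ? float3(0, 1, 0) : float3(1, 0, 0);
    float3 tangent   = normalize(cross(reference, up));
    float3 bitangent = cross(up, tangent);

    // Rotate out of tangent space (x along tangent, y along up, z along bitangent).
    return normalize(tangentNormal.x * tangent +
                     tangentNormal.y * up +
                     tangentNormal.z * bitangent);
}
[/code]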

Maybe I'm just approaching this issue incorrectly, I don't know.

Jason Z    6434
I am assuming you are talking about generating a per-vertex normal vector, right? If so, then a simple way to create the normal vector is to use the neighboring vertices. For example, if you have vertex v00, with v10 to the right of it and v01 above it, then you would generate the normal vector as the cross product of the vector from v00 to each of its two neighbors:

( v10 - v00 ) x ( v01 - v00 )

This will work as long as the vertex positions that are used in the calculation are in a consistent coordinate space, and have the displacement already applied to them. This will generate fairly simple normal vectors, but will work as expected for any geometry.
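
In shader terms it's just this (a sketch; where the three displaced positions come from is up to you):

[code]
// Sketch: per-vertex normal from already-displaced neighbor positions.
// v00 is the current vertex, v10 its right neighbor, v01 the neighbor above it,
// all in the same (world) space with displacement already applied.
float3 ComputeNormal(float3 v00, float3 v10, float3 v01)
{
    // ( v10 - v00 ) x ( v01 - v00 )
    return normalize(cross(v10 - v00, v01 - v00));
}
[/code]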

If you want to use the higher-quality generation methods (i.e., using more samples), then you just need to produce the vertices and carry out the Sobel-filter equivalent of the simple operation listed above. You simply have to work with the actual geometry, and then generating the normal vectors is fairly easy.

Hyunkel    401
Yes, I am talking about per-vertex normals.
It makes perfect sense that these techniques will create correct world space normals if I work with the actual (already displaced) terrain.

However, that would involve calculating adjacent vertex positions within the vertex shader, right?
After all, I cannot sample multiple vertices from the vertex buffer at once.

I mean, it's certainly possible, but my current terrain generation involves projecting a unit cube to a sphere using this technique:
[url="http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html"]http://mathproofs.bl...-to-sphere.html[/url]
The problem is that due to the scale I am working at, I currently perform this operation in double precision, which is an issue if I have to do it on the GPU.
It might be possible though; I should be able to avoid wobbly terrain by simply increasing the size of the planet to avoid imprecision issues during camera projection.
If I do this, I might as well do all the calculations on the gpu though.
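
For reference, the projection from that page is small enough to live in a shader; a single-precision sketch (whether float is precise enough at planetary scale is exactly the open question):

[code]
// Cube-to-sphere mapping from the mathproofs article, in single precision.
// p is a point on the unit cube ([-1,1]^3); the result lies on the unit sphere.
float3 CubeToSphere(float3 p)
{
    float3 p2 = p * p;
    return float3(
        p.x * sqrt(1.0 - p2.y * 0.5 - p2.z * 0.5 + p2.y * p2.z / 3.0),
        p.y * sqrt(1.0 - p2.x * 0.5 - p2.z * 0.5 + p2.x * p2.z / 3.0),
        p.z * sqrt(1.0 - p2.x * 0.5 - p2.y * 0.5 + p2.x * p2.y / 3.0));
}
[/code]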

I'm beginning to think that this might not be the best approach, though.
It would probably work a lot better if I did this with compute shaders, right? (I am working with DX11 hardware.)
I could calculate the world-space normals for a terrain patch and store them in a texture to be used during rendering.
This way I only need to calculate the normals for each patch once per LOD change instead of once every frame.
In fact, if I go this way, I might as well use nothing but a single 32x32 grid and use hardware instancing to draw the entire planet, which sounds like a good idea.
All I'd have to do on the CPU is calculate a world transform matrix for every patch; the height map, normal map, and spherical projection could be handled on the GPU.
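
Something along these lines is what I'm picturing for the per-patch pass (untested sketch; the patch constants, resource layout, and winding are all placeholders):

[code]
// Untested sketch: a per-patch compute pass that bakes world-space normals
// into a texture. It re-runs only when a patch's LOD changes, not every frame.
Texture2D<float>    PatchHeight  : register(t0);
RWTexture2D<float4> PatchNormals : register(u0);

cbuffer PatchConstants : register(b0)
{
    float4x4 PatchToCube;   // grid coordinates -> position on the unit cube face
    float    HeightScale;
    float    PlanetRadius;
    uint     PatchSize;     // texels per side
};

float3 CubeToSphere(float3 p)
{
    float3 p2 = p * p;
    return float3(
        p.x * sqrt(1.0 - p2.y * 0.5 - p2.z * 0.5 + p2.y * p2.z / 3.0),
        p.y * sqrt(1.0 - p2.x * 0.5 - p2.z * 0.5 + p2.x * p2.z / 3.0),
        p.z * sqrt(1.0 - p2.x * 0.5 - p2.y * 0.5 + p2.x * p2.y / 3.0));
}

// Displaced world-space position of one heightmap texel.
float3 DisplacedPosition(int2 texel)
{
    texel = clamp(texel, 0, (int)PatchSize - 1);
    float3 cubePos   = mul(PatchToCube, float4(texel.x, texel.y, 0.0, 1.0)).xyz;
    float3 sphereDir = CubeToSphere(cubePos);
    float  height    = PatchHeight.Load(int3(texel, 0));
    return sphereDir * (PlanetRadius + height * HeightScale);
}

[numthreads(8, 8, 1)]
void BakeNormalsCS(uint3 id : SV_DispatchThreadID)
{
    if (id.x >= PatchSize || id.y >= PatchSize)
        return;

    float3 v00 = DisplacedPosition(int2(id.xy));
    float3 v10 = DisplacedPosition(int2(id.xy) + int2(1, 0));
    float3 v01 = DisplacedPosition(int2(id.xy) + int2(0, 1));

    // Cross product of the two edge vectors; the sign may need flipping
    // depending on the cube face orientation and triangle winding.
    float3 n = normalize(cross(v10 - v00, v01 - v00));
    PatchNormals[id.xy] = float4(n, 0.0);
}
[/code]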


Then again, I've never used compute shaders before, in fact I know absolutely nothing about them.
It might be the perfect opportunity to learn something new though.
Unfortunately it seems to be rather difficult to find resources about compute shaders in relation to procedural geometry.
Then again, this looks very promising: http://www.infinity-universe.com/Infinity/index.php?option=com_content&task=view&id=117&Itemid=26

Any thoughts on the compute shader approach?

Cheers,
Hyu

Jason Z    6434
I think the compute shader is the perfect place to do this. I actually had the geometry shader in mind when I wrote my first post, but the compute shader is probably a better choice. If you need some help getting started, the WaterSimulationI sample program from Hieroglyph 3 does something similar to what you want. It shows how to create the buffers for unordered access views, syntax for accessing the data, etc... There is also another approach for getting data from a buffer into a generic vertex in the ParticleStorm demo too - if you read through those two samples, I think you will have a decent start on using the CS.

My personal preference would be to compute the vertices for a patch, then keep them in memory (i.e., cache the results). You can't render directly from a compute shader, but it should be trivial to store the results in a buffer that you render from every frame. Then you could always update a patch when the LOD requirements change. Plus, with compute shaders you get to use the group shared memory, meaning you can even reduce the bandwidth needed to load the vertices for a thread group. If you search in my dev journal there are some older posts about compute shaders and UAVs, but it sounds like you already know the basics of how they work.
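
The group shared memory part looks roughly like this (a fragment, just to show the pattern; tile size and border handling are placeholder choices):

[code]
// Fragment: each thread group loads its tile of heights (plus a one-texel
// border) into group shared memory once, then every thread reads its
// neighbors from there instead of re-sampling the texture.
Texture2D<float> PatchHeight : register(t0);

#define TILE 8
groupshared float gsHeight[TILE + 2][TILE + 2];

[numthreads(TILE, TILE, 1)]
void LoadTileCS(uint3 gtid : SV_GroupThreadID, uint3 dtid : SV_DispatchThreadID)
{
    // Each thread loads its own texel into the interior of the shared tile;
    // a complete version would also fill the one-texel border.
    gsHeight[gtid.y + 1][gtid.x + 1] = PatchHeight.Load(int3(dtid.xy, 0));
    GroupMemoryBarrierWithGroupSync();

    // ...neighboring heights can now be read from gsHeight for the normal filter.
}
[/code]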

I have to warn you though, once you start using compute shaders, you won't want to stop :)

Hyunkel    401
[quote name='Jason Z' timestamp='1326782397' post='4903524']
I think the compute shader is the perfect place to do this. I actually had the geometry shader in mind when I wrote my first post, but the compute shader is probably a better choice. If you need some help getting started, the WaterSimulationI sample program from Hieroglyph 3 does something similar to what you want. It shows how to create the buffers for unordered access views, syntax for accessing the data, etc... There is also another approach for getting data from a buffer into a generic vertex in the ParticleStorm demo too - if you read through those two samples, I think you will have a decent start on using the CS.[/quote]

Great! This is exactly what I need to get started! :)

[quote name='Jason Z' timestamp='1326782397' post='4903524']My personal preference would be to compute the vertices for a patch, then keep them in memory (i.e., cache the results). You can't render directly from a compute shader, but it should be trivial to store the results in a buffer that you render from every frame. Then you could always update a patch when the LOD requirements change. Plus, with compute shaders you get to use the group shared memory, meaning you can even reduce the bandwidth needed to load the vertices for a thread group. If you search in my dev journal there are some older posts about compute shaders and UAVs, but it sounds like you already know the basics of how they work.

I have to warn you though, once you start using compute shaders, you won't want to stop :)
[/quote]

This sounds very promising and encouraging, I'm definitely going with the compute shader approach.
Thanks for your help!

Cheers,
Hyu

