DirectX 11, using Tessellation & Geometry shader in a single pass

Started by
16 comments, last by Jason Z 10 years, 9 months ago

What you describe (one pass, with vertex noise applied in the domain shader) is what is working now!

The problem I was concerned about (see my first post) was how to compute the normals with this method...

It works to generate some extra noise points close to the base vertex and compute the normal from them... But it's a lot of GPU work, and not a very good solution...
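That brute-force approach (extra noise samples around each vertex) amounts to central differences on the height field. A minimal sketch in Python, where `height` is only a stand-in for the real noise function:

```python
import math

def height(x, z):
    # Stand-in for the real noise lookup; any scalar height field works here.
    return 0.2 * math.sin(3.0 * x) * math.cos(2.0 * z)

def normal_central_difference(x, z, eps=1e-3):
    # Sample the height field at four neighboring points around (x, z)
    # and estimate the partial derivatives.
    dhdx = (height(x + eps, z) - height(x - eps, z)) / (2.0 * eps)
    dhdz = (height(x, z + eps) - height(x, z - eps)) / (2.0 * eps)
    # Tangents (1, dhdx, 0) and (0, dhdz, 1) give the normal (-dhdx, 1, -dhdz).
    n = (-dhdx, 1.0, -dhdz)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

This is the cost problem in miniature: four extra height lookups per vertex just to get one normal.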

At the start, I wanted to compute the normals with the geometry shader...

But unfortunately, that is not possible in the same pass, because the output of the domain shader is not compatible (it cannot output triangle_adj).

My ideas were:

A - First pass for tessellation, second pass (with TRIANGLE_ADJ) for noise in the vertex shader and computing the normals in the geometry shader

B - Like A, but using the compute shader for noise and normals after the tessellation (and a third pass for the pixel shader)

It doesn't feel "clean" to have to compute extra noise points just to get one normal... Whereas if I render in multiple passes, the noise is computed only once per vertex and I can use the vertex adjacency to compute the normals...

I'm going crazy ;) Not so easy ;)

What do you think about this normals problem?


How about having a pre-calculated noise normal function? Using the same method that you use for a scalar value in the noise function now, you can expand that to use a vector value field instead. Then the noise lookup is essentially the same 'cost' as the regular noise lookup. That is probably how I would approach the problem...

Regarding your options, I think you won't be any better off doing it in two passes instead of one. You can output a triangle list to your geometry shader, which means you will have one face to calculate a normal vector from. However, if you do a first pass for tessellation and then calculate the normal in the second pass, you still only know about one face per vertex - there is no way to get the adjacency information back. So you would be just as good off to use the single pass with a geometry shader working on the face normal.
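The single-pass face-normal route described above is just a cross product of two triangle edges. A hypothetical sketch of what the geometry shader would do for each triangle it receives from the tessellator:

```python
import math

def face_normal(v0, v1, v2):
    # Two edges of the triangle, taken from the shared vertex v0.
    e1 = tuple(b - a for a, b in zip(v0, v1))
    e2 = tuple(b - a for a, b in zip(v0, v2))
    # Cross product e1 x e2 gives a vector perpendicular to the face.
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

The limitation Jason Z points out is visible here: with a plain triangle list, each invocation sees only its own three vertices, so every vertex of the face gets the same flat normal.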


How about having a pre-calculated noise normal function? Using the same method that you use for a scalar value in the noise function now, you can expand that to use a vector value field instead. Then the noise lookup is essentially the same 'cost' as the regular noise lookup. That is probably how I would approach the problem...

Oh! Very interesting... I never thought about that... I must investigate how to do it (do you have some links?). Your approach seems to be the best!! Thank you very much ;)


Regarding your options, I think you won't be any better off doing it in two passes instead of one. You can output a triangle list to your geometry shader, which means you will have one face to calculate a normal vector from. However, if you do a first pass for tessellation and then calculate the normal in the second pass, you still only know about one face per vertex - there is no way to get the adjacency information back. So you would be just as good off to use the single pass with a geometry shader working on the face normal.

And if I cannot have the adjacency information in the second pass... I don't have much choice...

Can I ask you: if you were working on the same project, what "solution" do you think you would use? Big steps, not details ;)

For info, it's a personal project only; I'm not an expert in graphics development ;)

I would go for the pre-calculated solution. That would let you use as complicated an algorithm as you could possibly want to generate the normal vectors from your noise function, and then store the noise value and the normal vector together in your noise structure. Then when you do the Perlin lookup, just look up the 4-component value and normalize the normal-vector portion of the values.

I think that would simplify the process, keep the number of noise lookups down, and let you work in a single pass - which should handle all of your requirements! The only work is to pre-calculate the normal vectors and ensure that you won't run into any situations where you end up with a <0,0,0> value from your routine.
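The 4-component idea could be prototyped like this; the `height` function, grid layout, and table structure below are placeholders for the real Perlin data, not the actual implementation:

```python
import math

def height(x, z):
    # Stand-in for the real noise function.
    return 0.2 * math.sin(3.0 * x) * math.cos(2.0 * z)

def build_noise_table(size, cell):
    # Precompute a 4-component entry per grid point, mirroring a float4
    # texture: (height, unnormalized normal x/y/z).
    eps = 1e-3
    table = {}
    for i in range(size):
        for j in range(size):
            x, z = i * cell, j * cell
            dhdx = (height(x + eps, z) - height(x - eps, z)) / (2 * eps)
            dhdz = (height(x, z + eps) - height(x, z - eps)) / (2 * eps)
            table[(i, j)] = (height(x, z), -dhdx, 1.0, -dhdz)
    return table

def lookup(table, i, j):
    # One fetch returns both the height and the normal; only the normal
    # part needs normalizing at lookup time.
    h, nx, ny, nz = table[(i, j)]
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return h, (nx / length, ny / length, nz / length)
```

Since the normal's y component is fixed at 1 before normalization here, the `<0,0,0>` degenerate case mentioned above cannot occur with this particular construction.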

I never talked about my other goals for this project... It's a lot of work, I know, but I'll try ;)

One pass will never be sufficient for all my needs..

- GPU terrain rendering with "unlimited" details (from planet scale to one meter)

- Terrain self-shadowing from a directional light (pretty results with shadow mapping and blur now)

- Ability to hand-modify the terrain, and to store and re-use the modifications (no ideas for the moment...)

- Terrain logic, featuring different noise parameters to get mountains, oceans, etc....

- Terrain logic: rivers and other stuff...

- Collision detection (perhaps on the GPU with a compute shader)

- ...

Before, I made a project like this, but with a flat, limited, pre-computed map... Not spherical, no noise on the GPU...

I know that I cannot have a real-time algorithm for all this stuff; there will be some pre-computed data (like a logical map with zones, rivers, etc.).

My current graphics engine is sufficient for tests and for moving forward... What I'm missing now is the global logic/architecture... It's more a brain problem than a technical one ;)

Perhaps you have some suggestions or ideas for the general steps?

Thank you again for your time and your help !!

PS: I tried to calculate the normal in the noise algorithm, using derivatives... Not so easy, but I have some results... Not good enough, though; I think I'm missing something...

Perhaps, for other needs, I will use the compute shader anyway, and so use it to compute the normals correctly...
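For the derivative route mentioned in the PS: when the height field has a known analytic form, the gradient can be written out by hand, which gives the same normal as finite differences without any extra samples. A sketch under the assumption of a simple sinusoidal field (the real fractal noise would need the derivative of each octave summed the same way):

```python
import math

def analytic_normal(x, z):
    # Height field h(x, z) = 0.2 * sin(3x) * cos(2z), differentiated by hand
    # instead of sampled at neighboring points.
    dhdx = 0.2 * 3.0 * math.cos(3.0 * x) * math.cos(2.0 * z)
    dhdz = -0.2 * 2.0 * math.sin(3.0 * x) * math.sin(2.0 * z)
    n = (-dhdx, 1.0, -dhdz)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

A common subtlety with this approach is that the derivative of every octave (and of any domain warping) must be accumulated too; forgetting a term gives normals that look almost right, which may explain the "not good enough" results.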

I haven't done any larger scale terrain renderers, although it does seem to be a popular topic. You might be interested in checking out the tiled resources in D3D11.2, as they could help you with texture variation over the surface of the planet.

One thing that comes to mind is that you should generate the geometry used for rendering in caches. I presume your geometry will change based on the location of your camera, so if your camera stays in one general area then you should build the geometry and then cache it if possible to minimize the time used to build it up. This would play nicely with your ideas of direct modification and re-saving too. And you would also be able to use the GPU with stream output for optimized computation.
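The caching idea could start as small as a tile-keyed LRU. Everything below (the keys, the capacity, the `build` callback) is illustrative only; a real engine would key on something like (cube face, LOD level, tile coordinates) and cache GPU buffers built via stream output rather than Python objects:

```python
from collections import OrderedDict

class TileCache:
    # Minimal LRU cache for built terrain tiles. Hypothetical sketch,
    # not an engine API.
    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = OrderedDict()

    def get(self, key, build):
        if key in self.tiles:
            self.tiles.move_to_end(key)     # mark as recently used
            return self.tiles[key]
        mesh = build(key)                    # expensive: tessellate + noise
        self.tiles[key] = mesh
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)   # evict least recently used
        return mesh
```

While the camera stays in one area, the same tiles keep hitting the cache and the expensive build step runs only on first use; it also gives a natural place to persist hand-made terrain edits per tile.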

If you have any screen shots I would love to see how it looks now!

Again, thank you ;) Interesting feature in D3D11.2... I just need to move to Windows 8!!

Because of the LOD (tessellation, noise lookups, etc.), the geometry will change very often... But it could be very efficient to avoid rebuilding the world when it's not necessary...

I must work on it ;)

Apart from an old screenshot from when I worked on shadows, my project is divided into several branches, where I test separately (with specific geometries) the different parts of the main project. So I don't have any good-looking screenshots for the moment... Perhaps I will open a little blog when the project is more advanced ;)

The graphics engine is on a good path, but there is so much work to do on the world builder (and not so much time)...

[screenshot: oldhouse.jpg]

That's always the case - more features to add than time permits ;) That screenshot looks pretty good in any case, so keep up the good work! You can always start a development journal here on gamedev.net if it would be useful for your purposes...

This topic is closed to new replies.
