
DirectX 11, using Tessellation & Geometry shader in a single pass


17 replies to this topic

#1 chrisendymion   Members   -  Reputation: 169


Posted 05 July 2013 - 12:34 AM

Hello ;)

 

First of all, sorry for my poor English!

 

With DirectX 11, I'm trying to create a random map entirely on the GPU.

 

In the hull shader stage, I manage LOD with tessellation.

In the domain shader stage, I generate the map (based on Perlin noise).

 

Now my goal is to compute per-vertex normals in the geometry shader. For that, I need vertex adjacency, which the geometry shader supports.

 

But here is the problem... For tessellation, my primitives must use the D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST topology.

But for a geometry shader taking 6 vertices (a triangle primitive plus adjacency), I must use D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST_ADJ.

 

I think I'm missing something... It must be possible to tessellate and use the results in the geometry shader...

It does work with 3 points, but I cannot use the other 3 (they come through as 0.0, 0.0, 0.0)...

 

Thank you in advance for any help ;)


Edited by chrisendymion, 05 July 2013 - 12:35 AM.



#2 unbird   Crossbones+   -  Reputation: 4816


Posted 05 July 2013 - 01:08 AM

The domain shader can only (as of D3D 11.0) output points, lines and triangles - without adjacency information - so I don't think this is possible. Even if it were, I think it would get tricky at patch edges.

 

So: you need to do your normals in the domain shader.



#3 chrisendymion   Members   -  Reputation: 169


Posted 05 July 2013 - 03:23 AM

Thank you !

 

Which do you think is best (if possible):

 

1- Computing normals in the domain shader, which is slow because I must call the noise function multiple times (adjacent vertices based on an offset/delta)

2- Using the stream output from the geometry shader, and in a second pass feeding the buffer back with TRIANGLELIST_ADJ and computing normals in the geometry shader

 

 

Or any other suggestion ?
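Option 1 above can be sketched outside HLSL, in Python; `height` is a hypothetical stand-in for the Perlin/fBm function the domain shader would run, and the two central differences per axis are exactly the extra noise evaluations being described:

```python
import math

def height(x, z):
    # Hypothetical stand-in for the Perlin/fBm height function the
    # domain shader would evaluate; any smooth function works here.
    return math.sin(x) * math.cos(z)

def normal_from_height(x, z, eps=1e-3):
    # Central differences: two extra height evaluations per axis.
    # This is exactly the added cost option 1 describes.
    dhdx = (height(x + eps, z) - height(x - eps, z)) / (2.0 * eps)
    dhdz = (height(x, z + eps) - height(x, z - eps)) / (2.0 * eps)
    # Normal of the heightfield y = height(x, z).
    nx, ny, nz = -dhdx, 1.0, -dhdz
    inv_len = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx * inv_len, ny * inv_len, nz * inv_len)
```

Five height evaluations per vertex (one for displacement, four for the normal) is the cost being weighed against the stream-output alternative.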



#4 unbird   Crossbones+   -  Reputation: 4816


Posted 05 July 2013 - 11:30 AM

There is no best ;) . Only profiling will tell.

Some random thoughts:
 

1- Computing normals in the domain shader, which is slow because I must call the noise function multiple times (adjacent vertices based on an offset/delta)

One general piece of advice for shader performance is to pull calculations up the pipeline where possible (to the hull shader or even the vertex shader). In this case that's unlikely to help. How about precalculating the noise into a texture and using SampleLevel in the domain shader? You could even prepare the normals this way, i.e. generate a normal map with a Sobel filter.

2- Using the stream output from the geometry shader, and in a second pass feeding the buffer back with TRIANGLELIST_ADJ and computing normals in the geometry shader

I expect pressure on bandwidth at high tessellation. Also, I have no idea how to deal with the patch edges. Actually, I wonder if this is even straightforward in the interior. Doesn't the domain shader just give you triangles? Do we know in which order they come? Maybe you need to output points anyway to get something sensible to work with.
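The Sobel idea mentioned above can be sketched in Python on a plain 2D height array (a CPU stand-in for the one-time pass that would bake the normal map; `strength` is an assumed scaling parameter, not anything from the thread):

```python
def sobel_normal(h, x, y, strength=1.0):
    # h is a 2D list of heights (a stand-in for the baked noise texture);
    # sampling is clamped at the borders.
    rows, cols = len(h), len(h[0])
    def s(r, c):
        return h[max(0, min(rows - 1, r))][max(0, min(cols - 1, c))]
    # Sobel kernels estimate the height gradient along x and y.
    gx = (s(y - 1, x + 1) + 2 * s(y, x + 1) + s(y + 1, x + 1)
          - s(y - 1, x - 1) - 2 * s(y, x - 1) - s(y + 1, x - 1))
    gy = (s(y + 1, x - 1) + 2 * s(y + 1, x) + s(y + 1, x + 1)
          - s(y - 1, x - 1) - 2 * s(y - 1, x) - s(y - 1, x + 1))
    # A steeper gradient tilts the normal away from straight up (+z here).
    nx, ny, nz = -gx * strength, -gy * strength, 1.0
    inv = 1.0 / (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx * inv, ny * inv, nz * inv)
```

Run once over the whole height texture, this produces the normal map the domain shader could then read with a single SampleLevel.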

#5 chrisendymion   Members   -  Reputation: 169


Posted 08 July 2013 - 12:17 AM

Thank you again for your help ;)
My goal here is to have a fully GPU-rendered (spherical) map. Without "limits": LOD is managed by the tessellator, detail by the noise loop (octaves and parameters).
So "in theory" you can get very close to the map without losing quality or speed.
 
Because trying solutions is a lot of work, I'm looking for the "probably" best way before testing.
 
You talked about a texture... You mean generating a texture in a first pass, then reading it in the second pass?
It could be interesting...
 
What about:
 
First Pass
- Passing a precomputed spherical model to the GPU
- Using the tessellator with the camera position & direction to add vertices (LOD)
- Streaming the buffer output from the geometry shader
 
OPTION A
Second Pass
- Using only the compute shader to compute noise and normals
(In this case, what about parallelism? How do I design the loop?)
 
Third Pass
- Pixel shader for lighting
 
OPTION B
Second Pass
- Using TRIANGLELIST_ADJ as the primitive topology
- Computing noise in the vertex shader
- Computing normals in the geometry shader
- Pixel shader for lighting
 
Or maybe a totally different solution?
I don't care about speed on medium or low-end hardware... only quality and good speed on a high-end GPU (GTX 780).

Edited by chrisendymion, 08 July 2013 - 03:02 AM.


#6 Jason Z   Crossbones+   -  Reputation: 4683


Posted 08 July 2013 - 05:13 AM

Why not do the noise lookup in your vertex shader?  That would allow you to grab the sample early in the pipeline, and then the interpolation between points after you tessellate could be based off of that early lookup.  This would effectively reduce your computational load, and if you choose your vertex density correctly then there will be very little difference between what you are proposing and doing it this way.

 

In fact, since Perlin noise is more or less an interpolation technique, you should be able to get away very easily with this type of approximation...
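The interpolation being referred to is visible in a minimal 1D Perlin sketch (Python; `grads` is a hypothetical fixed gradient table standing in for the usual permutation hashing):

```python
import math

def fade(t):
    # Perlin's quintic fade curve: zero first and second derivative
    # at t = 0 and t = 1, so neighboring cells join smoothly.
    return t * t * t * (t * (t * 6 - 15) + 10)

def perlin1d(x, grads):
    # grads: one gradient per integer lattice point (a hypothetical
    # fixed table; real implementations hash into a permutation table).
    i = math.floor(x)
    t = x - i
    a = grads[i] * t            # ramp from the left lattice point
    b = grads[i + 1] * (t - 1)  # ramp from the right lattice point
    # The noise value is a smoothed interpolation of the two ramps.
    return a + (b - a) * fade(t)
```

Since the value between lattice points is purely interpolated, sampling the noise at coarse vertices and letting the tessellator interpolate is, as suggested, an approximation of the same kind.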



#7 chrisendymion   Members   -  Reputation: 169


Posted 08 July 2013 - 05:37 AM

Thank you Jason for your answer ;)

 

Actually, I load a fixed icosphere into the vertex shader, which doesn't have a lot of vertices...

If we think of it as a planet, there will be only 1 vertex for hundreds of kilometers (or miles :-))!

So if the noise is applied at this stage, the interpolation after the tessellator will be too coarse: no details...

I can load a much more detailed sphere, but if the camera is very close there will be the same problem, a lack of detail (and if it's far away, much more work for the GPU)...

My goal is to have the same density of vertices (and noise detail) whether the camera is very, very close or far away.

Sorry for my poor English, I don't know if you understand what I mean...


Edited by chrisendymion, 08 July 2013 - 05:41 AM.


#8 Jason Z   Crossbones+   -  Reputation: 4683


Posted 08 July 2013 - 06:23 AM

Don't worry - your English is more than sufficient :)  I understand your current approach, but this is actually what I am talking about changing.  When you use only a single vertex for hundreds of kilometers of area, you are effectively not taking advantage of the vertex shader for any significant computation.  Instead of having a fixed resolution icosphere, why not start with a screen space set of vertices which would be generated based on the current view location?  That would let you put the resolution where it needs to be, and let your tessellation algorithms be more effective.  Remember, there is a limit to the amount of tessellation that can be achieved in one pass, so you can't count on it being too much of an amplification.



#9 chrisendymion   Members   -  Reputation: 169


Posted 08 July 2013 - 06:38 AM

Again, thank you ;)

 

OK, I understand part of what you're explaining...

 

Just one thing I don't totally understand:

 

 

 


Instead of having a fixed resolution icosphere, why not start with a screen space set of vertices which would be generated based on the current view location?

 

Can you tell me more about what you mean ?

 

I don't know if it's the same process... but in an earlier project I used a projective grid. Some good results, but lots of artifacts and problems when the camera got too close...

 

[EDITED]

My proposal: a first pass only to tessellate, then a second pass using the vertex shader to compute the noise. That wouldn't be too much slower, I think..? And it could solve the problem completely? (with a much more detailed sphere as the base model, to avoid the tessellator's amplification limit)


Edited by chrisendymion, 08 July 2013 - 06:55 AM.


#10 Jason Z   Crossbones+   -  Reputation: 4683


Posted 08 July 2013 - 07:08 AM

That could work - using one pass to tessellate, and then a second pass to add in the noise.  But I think you could just do the noise lookup in the domain shader in that case and produce the whole thing in one pass.  This is something you would have to try out and see which way works faster - either with stream output or directly working in the domain shader.  Just architect your code to be able to be done in discrete chunks so that you can swap them in and out for profiling.

 

The projective grid is one possibility, but you could just as easily have a flat, uniform grid of vertices based on the portion of the planet that you are near.  Just think of it as a set of patches that make up the planet.  You could have multiple resolution patches too, so that when you are close by then you switch to a smaller vertex to vertex distance.



#11 chrisendymion   Members   -  Reputation: 169


Posted 08 July 2013 - 07:20 AM

What you describe (one pass, with the vertex noise applied in the domain shader) is what is working now!

The problem I raised (first post) was how to compute the normals with this method...

It works to generate some extra noise points close to the base vertex and compute the normal from those... but that's a lot of GPU work... and not a very good solution...

I wanted (at first) to compute the normals in the geometry shader...

But unfortunately that is not possible in the same pass, because the domain shader's output is not compatible (it cannot output TRIANGLELIST_ADJ).

 

My ideas were:

A - A first pass for tessellation, then a second pass (with TRIANGLELIST_ADJ) computing noise in the vertex shader and normals in the geometry shader

B - Like A, but using the compute shader for noise and normals after the tessellation (and a third pass for the pixel shader)

It doesn't feel "clean" to have to compute extra noise points just to get one normal... whereas if I render in multiple passes, the noise is computed only once per vertex and I could use vertex adjacency to compute the normals...

 

I'm going crazy ;) Not so easy ;)

 

What do you think about this normals problem?



#12 Jason Z   Crossbones+   -  Reputation: 4683


Posted 08 July 2013 - 07:43 AM

How about having a pre-calculated noise normal function?  Using the same method that you use for a scalar value in the noise function now, you can expand that to use a vector value field instead.  Then the noise lookup is essentially the same 'cost' as the regular noise lookup.  That is probably how I would approach the problem...

 

Regarding your options, I think you won't be any better off doing it in two passes instead of one.  You can output a triangle list to your geometry shader, which means you will have one face to calculate a normal vector from.  However, if you do a first pass for tessellation and then calculate the normal in the second pass, you still only know about one face per vertex - there is no way to get the adjacency information back.  So you would be just as good off to use the single pass with a geometry shader working on the face normal.
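The packed-lookup idea can be sketched in Python; `noise_table` is a hypothetical stand-in for the 4-component texture, with (height, nx, ny, nz) per texel:

```python
import math

# Hypothetical precomputed table: each entry packs (height, nx, ny, nz),
# mirroring a 4-component texture texel fetched with one SampleLevel.
noise_table = {
    (0, 0): (0.25, 0.1, 0.9, 0.2),
    (0, 1): (0.75, -0.3, 0.8, 0.1),
}

def lookup(u, v):
    h, nx, ny, nz = noise_table[(u, v)]
    # One fetch yields both the displacement and the stored normal;
    # normalizing it is the only extra ALU work versus a scalar lookup.
    inv = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
    return h, (nx * inv, ny * inv, nz * inv)
```

The precalculation step (not shown) can use as expensive a normal-generation method as desired, since it runs offline.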



#13 chrisendymion   Members   -  Reputation: 169


Posted 08 July 2013 - 07:52 AM


How about having a pre-calculated noise normal function?  Using the same method that you use for a scalar value in the noise function now, you can expand that to use a vector value field instead.  Then the noise lookup is essentially the same 'cost' as the regular noise lookup.  That is probably how I would approach the problem...

 

Oh... Very interesting... I never thought about that... I must investigate how to do it (do you have some links?)... But your approach seems to be the best!! Thank you very much ;)

 


Regarding your options, I think you won't be any better off doing it in two passes instead of one.  You can output a triangle list to your geometry shader, which means you will have one face to calculate a normal vector from.  However, if you do a first pass for tessellation and then calculate the normal in the second pass, you still only know about one face per vertex - there is no way to get the adjacency information back.  So you would be just as good off to use the single pass with a geometry shader working on the face normal.

 

And if I cannot get adjacency information in the second pass... I don't have much choice...

 

Can I ask: if you were working on the same project, what "solution" do you think you would use? Big steps, not details ;)

For info, it's only a personal project; I'm not an expert in graphics development ;)



#14 Jason Z   Crossbones+   -  Reputation: 4683


Posted 08 July 2013 - 08:08 AM

I would go for the pre-calculated solution.  That would let you do as complicated of algorithm as you could possibly want to generate the normal vectors from your noise function, and then store the noise value and the normal vector together in your noise structure.  Then when you do the perlin lookup, just lookup the 4-component value and normalize the normal vector portion of the values.

 

I think that would simplify the process, keep the number of noise lookups down, and let you work in a single pass - which should handle all of your requirements!  The only work is to pre-calculate the normal vectors and ensure that you won't run into any situations where you end up with a <0,0,0> value from your routine.



#15 chrisendymion   Members   -  Reputation: 169


Posted 10 July 2013 - 06:01 AM

I never talked about my other goals for this project... It's a lot of work, I know, but I'll try ;)

 

One pass will never be sufficient for all my needs..

 

- GPU terrain rendering with "unlimited" detail (from planet scale down to one meter)

- Terrain self-shadowing from a directional light (pretty results with shadow mapping and blur now)

- Ability to hand-modify the terrain, then store and re-use the modifications (no ideas for the moment...)

- Terrain logic, with different noise parameters for mountains, oceans, etc.

- Terrain logic: rivers and other features...

- Collision detection (perhaps on the GPU with a compute shader)

- ...

 

Before this, I made a similar project, but with a flat, limited, precomputed map... not spherical, no noise on the GPU...

I know I cannot have a real-time algorithm for all of this; there will be some precomputed data (like a logical map with zones, rivers, etc.).

My current graphics engine is sufficient for tests and for moving forward... What I'm missing now is the global logic/architecture... It's more a brain problem than a technical one ;)

 

Perhaps, do you have some suggestions, ideas for generals steps ?

 

Thank you again for your time and your help !!

 

PS: I tried to calculate the normal in the noise algorithm, using derivatives... Not so easy, but I have some results... not good enough though; I think I'm missing something...

Perhaps, for other needs, I will use the compute shader, and so use it to compute the normals correctly...



#16 Jason Z   Crossbones+   -  Reputation: 4683


Posted 11 July 2013 - 04:58 AM

I haven't done any larger scale terrain renderers, although it does seem to be a popular topic.  You might be interested in checking out the tiled resources in D3D11.2, as they could help you with texture variation over the surface of the planet.

 

One thing that comes to mind is that you should generate the geometry used for rendering in caches.  I presume your geometry will change based on the location of your camera, so if your camera stays in one general area then you should build the geometry and then cache it if possible to minimize the time used to build it up.  This would play nicely with your ideas of direct modification and re-saving too.  And you would also be able to use the GPU with stream output for optimized computation.

 

If you have any screen shots I would love to see how it looks now!



#17 chrisendymion   Members   -  Reputation: 169


Posted 11 July 2013 - 05:40 AM

Again, thank you ;) Interesting feature in D3D11.2... I just need to move to Windows 8!!

 

Because of the LOD (tessellation, noise lookups, etc.), the geometry will change very often... but it could be very efficient to limit world rebuilding when it's not necessary...
I must work on it ;)

 

Except for an old screenshot from when I worked on shadows, my project is divided into several branches where I test separately (with specific geometries) the different parts of the main project. So for the moment I don't have good-looking screenshots... Perhaps I will open a little blog when the project is more advanced ;)

The graphics engine is in good shape, but there is still so much work to do on the world builder (and not so much time)...

 

 

[Attached screenshot: oldhouse.jpg]



#18 Jason Z   Crossbones+   -  Reputation: 4683


Posted 12 July 2013 - 04:53 AM

That's always the case - more features to add than time permits ;)  That screenshot looks pretty good in any case, so keep up the good work!  You can always start a development journal here on gamedev.net if it would be useful for your purposes...





