Geometry shader, but why?

Hi,

I wanted to generate geometry such as a sphere in the geometry shader (by sending only a single point to the GPU).
But I realized that a geometry shader invocation can only output up to 1024 scalar values, and that the geometry shader is not designed to generate a lot of data.

That's too bad!! Why such a design??

Then I thought it didn't matter: I would use the hull and domain shaders to tessellate my sphere. But once again I was disappointed: the geometry shader comes after the tessellator stage!!! Again: why such a design??

So I guess what I have to do is just generate my sphere on the CPU and tessellate it in the tessellator stage; but I'd like to understand why they designed the graphics pipeline that way... I am sure I am missing something...

Cheers,
You can effectively use the tessellator to generate a sphere from one point.

There are architectural and performance reasons as to why the GS is after tessellation.

Consider that the geometry shader has a limit on the amount of memory available for its output stream (which is, believe it or not, well justified). The memory the geometry shader writes to is effectively not as high-performance as the memory the input assembler reads from, because the pipeline can make very few assumptions about GS output until the shader is actually run.

The tessellator very much prefers a static input patch layout over a potentially dynamic one (such as one that would result from a GS), because configuring the tessellator is internally expensive and would potentially need to be done for each input primitive.

More practically, one common use of the geometry shader is to extrude shadow volumes from triangles. It would be very inconvenient if the geometry shader was run before the tessellator in this case.
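In HLSL, a stripped-down version of that extrusion could look something like this (an untested sketch: per-triangle extrusion with no adjacency-based silhouette detection and no front/back caps, and cbuffer names like gLightPosW and gViewProj are made up):

// Minimal sketch of shadow-volume extrusion in a geometry shader.
// Every light-facing triangle is extruded away from the light; a real
// implementation would use adjacency to find silhouettes and emit caps.
cbuffer PerFrame : register(b0)
{
    float4x4 gViewProj;   // hypothetical view-projection matrix
    float3   gLightPosW;  // hypothetical world-space light position
};

struct VSOut { float3 posW : POSITION; };
struct GSOut { float4 posH : SV_Position; };

// Extrude edge (a, b) away from the light to infinity (w = 0).
void EmitSideQuad(float3 a, float3 b, inout TriangleStream<GSOut> ts)
{
    GSOut o;
    o.posH = mul(float4(a, 1.0f), gViewProj);               ts.Append(o);
    o.posH = mul(float4(a - gLightPosW, 0.0f), gViewProj);  ts.Append(o);
    o.posH = mul(float4(b, 1.0f), gViewProj);               ts.Append(o);
    o.posH = mul(float4(b - gLightPosW, 0.0f), gViewProj);  ts.Append(o);
    ts.RestartStrip();
}

[maxvertexcount(12)]
void GS(triangle VSOut tri[3], inout TriangleStream<GSOut> ts)
{
    float3 n = cross(tri[1].posW - tri[0].posW, tri[2].posW - tri[0].posW);
    if (dot(n, gLightPosW - tri[0].posW) > 0.0f)  // light-facing only
    {
        EmitSideQuad(tri[0].posW, tri[1].posW, ts);
        EmitSideQuad(tri[1].posW, tri[2].posW, ts);
        EmitSideQuad(tri[2].posW, tri[0].posW, ts);
    }
}

The triangles being extruded here are exactly the kind of primitives the tessellator produces, which is why you want the GS downstream of it.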

Niko Suni

Another use of the geometry shader is to select the render target when rendering into a render target array. In this case the geometry shader is typically almost a pass-through shader that just sets the correct destination render target slice for each primitive. This helps to reduce draw calls when creating cascaded shadow maps.
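For what it's worth, that pass-through GS is only a few lines of HLSL (a sketch; the per-vertex "cascade" input is a made-up name for however you choose the slice upstream):

// Pass-through geometry shader that routes each triangle to a slice of a
// render target array (e.g. one slice per shadow cascade).
struct VSOut
{
    float4 posH    : SV_Position;
    uint   cascade : CASCADE;  // hypothetical: slice chosen upstream
};

struct GSOut
{
    float4 posH  : SV_Position;
    uint   slice : SV_RenderTargetArrayIndex;  // selects the array slice
};

[maxvertexcount(3)]
void GS(triangle VSOut tri[3], inout TriangleStream<GSOut> ts)
{
    [unroll]
    for (int i = 0; i < 3; ++i)
    {
        GSOut o;
        o.posH  = tri[i].posH;
        o.slice = tri[i].cascade;
        ts.Append(o);
    }
    ts.RestartStrip();
}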

Best regards!
Thanks a lot for your answers!
One other point to consider is that the geometry shader is intended to operate on all primitives. Since the tessellation stages produce primitives, it is a logical choice to have the geometry shader appear after them in the pipeline.

If you really want to generate the geometry in the GS and then tessellate it, you could always use the stream output functionality to capture the output from your GS and then feed that back into the pipeline to do the tessellation work. This will not be very efficient, but if you are looking for everything to be done on the GPU then this is one option.

However, whatever input primitive you want to use to generate the sphere, I have to agree with Niko that you can easily generate the base portion of the sphere in the hull shader, then tessellate it and produce the output vertices in the domain shader - it should be no problem! This would make the most efficient usage of the hardware and let you keep everything on the GPU.
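Roughly along these lines (an untested sketch, assuming a 1-control-point patch list; gRadius, gTessFactor and friends are illustrative names, and the simple spherical mapping pinches at the poles):

// Sketch: expand a single control point into a tessellated sphere.
// Draw with D3D11_PRIMITIVE_TOPOLOGY_1_CONTROL_POINT_PATCHLIST.
cbuffer PerObject : register(b0)
{
    float4x4 gViewProj;    // hypothetical view-projection matrix
    float    gRadius;      // hypothetical sphere radius
    float    gTessFactor;  // hypothetical tessellation factor
};

struct HSIn  { float3 posW : POSITION; };  // the sphere center
struct HSOut { float3 posW : POSITION; };

struct PatchConst
{
    float edges[4]  : SV_TessFactor;
    float inside[2] : SV_InsideTessFactor;
};

PatchConst ConstantHS(InputPatch<HSIn, 1> p)
{
    PatchConst pc;
    pc.edges[0] = pc.edges[1] = pc.edges[2] = pc.edges[3] = gTessFactor;
    pc.inside[0] = pc.inside[1] = gTessFactor;
    return pc;
}

[domain("quad")]
[partitioning("fractional_even")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(1)]
[patchconstantfunc("ConstantHS")]
HSOut HS(InputPatch<HSIn, 1> p, uint i : SV_OutputControlPointID)
{
    HSOut o;
    o.posW = p[0].posW;  // just pass the center through
    return o;
}

struct DSOut
{
    float4 posH    : SV_Position;
    float3 normalW : NORMAL;
};

[domain("quad")]
DSOut DS(PatchConst pc, float2 uv : SV_DomainLocation,
         const OutputPatch<HSOut, 1> patch)
{
    // Map the tessellated unit square onto the sphere surface.
    const float PI = 3.14159265f;
    float phi   = uv.x * 2.0f * PI;  // longitude
    float theta = uv.y * PI;         // latitude
    float3 n = float3(sin(theta) * cos(phi),
                      cos(theta),
                      sin(theta) * sin(phi));
    DSOut o;
    o.normalW = n;
    o.posH = mul(float4(patch[0].posW + gRadius * n, 1.0f), gViewProj);
    return o;
}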
Ok, since we're on the geometry shader's right to exist... :)

Also, the geometry shader allows you to dynamically insert new primitives into an existing vertex stream. Consider a particle system in which a few "launch particles" are emitted that explode after a certain time, creating multiple new "secondary particles". By using stream output (steered by the geometry shader stage) you can feed the vertex data back into a vertex buffer and use it in the next frame. (See the DirectX SDK sample "GSParticles".)
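In spirit, the stream-out GS for that looks something like this (a toy sketch, not the actual GSParticles code; the explosion rule, the RandomDir helper and the cbuffer names are made up, and the API-side plumbing with CreateGeometryShaderWithStreamOutput / DrawAuto is omitted):

// Toy stream-output particle update: a "launcher" explodes into a few
// secondary particles once it is old enough; everything else just moves.
cbuffer PerFrame : register(b0)
{
    float gDeltaTime;   // hypothetical frame time
    float gExplodeAge;  // hypothetical launcher lifetime
};

struct Particle
{
    float3 posW : POSITION;
    float3 velW : VELOCITY;
    float  age  : AGE;
    uint   type : TYPE;  // 0 = launcher, 1 = secondary
};

float3 RandomDir(int i)
{
    // Crude fixed directions; real code would sample a random texture.
    float a = i * 1.5707963f;
    return normalize(float3(cos(a), 1.0f, sin(a)));
}

[maxvertexcount(4)]
void StreamOutGS(point Particle p[1], inout PointStream<Particle> so)
{
    Particle cur = p[0];
    cur.age += gDeltaTime;

    if (cur.type == 0 && cur.age > gExplodeAge)
    {
        // The launcher "explodes": emit secondaries instead of itself.
        [unroll]
        for (int i = 0; i < 4; ++i)
        {
            Particle s = cur;
            s.type = 1;
            s.age  = 0.0f;
            s.velW = RandomDir(i) * 5.0f;
            so.Append(s);
        }
    }
    else
    {
        cur.posW += cur.velW * gDeltaTime;  // advance and keep the particle
        so.Append(cur);
    }
}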

A second, very important, thing about stream output is that a geometry shader always "inserts" primitives into the stream (not just appends at the end). Imagine you have a line consisting of multiple particles. (Now comes a flow visualization example.) :) If you transport (i.e., advect) the particles in some sort of vector field (like a fluid or turbulent air), you can use the geometry shader to refine the line segments if two adjacent vertices (A and B) are transported too far apart. (Which is basically just inserting a new vertex between A and B.) The important thing is: the topology is preserved. You still have a point list that can be rendered as a line strip, because the order still fits, since the new vertex sits between A and B in the output stream. Doing this with compute shaders is rather cumbersome. Therefore, I often use the geometry shader in GPGPU applications (whenever I need to "insert" data into a stream).
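A sketch of that refinement step (untested; gMaxLen is a made-up threshold, and handling of the strip's last vertex is glossed over):

// Refine an advected particle line: whenever two adjacent vertices have
// drifted too far apart, insert a midpoint between them in the stream.
cbuffer Params : register(b0)
{
    float gMaxLen;  // hypothetical maximum allowed segment length
};

struct Particle { float3 posW : POSITION; };

[maxvertexcount(2)]
void RefineGS(line Particle seg[2], inout PointStream<Particle> so)
{
    so.Append(seg[0]);

    // A and B transported too far apart? Insert a vertex between them.
    // Because it is appended here, it lands between A and B in the output
    // stream, so the point list still renders as a correct line strip.
    if (distance(seg[0].posW, seg[1].posW) > gMaxLen)
    {
        Particle mid;
        mid.posW = 0.5f * (seg[0].posW + seg[1].posW);
        so.Append(mid);
    }
    // seg[1] is emitted by the next invocation as its seg[0]; the very
    // last vertex of the strip needs special handling (omitted here).
}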

Alright, and to talk about spheres... You could just as well generate a view-aligned quad in your geometry shader that covers the sphere and do an analytic ray cast against the sphere in the pixel shader to calculate the position and normal, discarding everything outside of the sphere. If you have many spheres with varying levels of detail that should be rendered in a single draw call, this approach might be less demanding than pushing them all through the tessellator.
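The pixel shader half of that could look roughly like this (a sketch; the interpolants and gEyePosW are assumed to be set up by the GS quad, and a real impostor would also write SV_Depth so the sphere sorts correctly against other geometry):

// Analytic ray-sphere intersection for a sphere impostor.
cbuffer PerFrame : register(b0)
{
    float3 gEyePosW;  // hypothetical world-space camera position
};

struct PSIn
{
    float4 posH    : SV_Position;
    float3 rayDirW : TEXCOORD0;  // eye-to-pixel direction from the GS quad
    float3 centerW : TEXCOORD1;  // sphere center
    float  radius  : TEXCOORD2;  // sphere radius
};

float4 PS(PSIn i) : SV_Target
{
    // Solve |(eye + t*d) - center|^2 = r^2 for the nearest t.
    float3 d    = normalize(i.rayDirW);
    float3 oc   = gEyePosW - i.centerW;
    float  b    = dot(oc, d);
    float  c    = dot(oc, oc) - i.radius * i.radius;
    float  disc = b * b - c;

    if (disc < 0.0f)
        discard;  // this pixel of the quad misses the sphere

    float  t    = -b - sqrt(disc);      // nearest intersection
    float3 posW = gEyePosW + t * d;
    float3 n    = normalize(posW - i.centerW);
    return float4(0.5f * n + 0.5f, 1.0f);  // e.g. visualize the normal
}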
This tutorial does exactly what you are looking for:

http://www.geeks3d.com/20101126/direct3d-11-tessellation-tutorial/

It shows how to build any parametric surface entirely on the GPU using the tessellator in one pass.
Sorry for the late answer (no internet for a bit)


If you really want to generate the geometry in the GS and then tessellate it

No, I don't really want to use the GS; it was just one idea for doing it.


you can easily generate the base portion of the sphere in the hull shader

Ok, that is a good solution; I didn't know I could do that. Thanks!!! :)

Thanks wiselogi and thanks Tsus!!!!! :)

