

Geometry shader, but why?



#1 smallGame   Members   -  Reputation: 208


Posted 24 November 2012 - 10:50 AM

Hi,

I wanted to generate geometry such as a sphere in the geometry shader (by sending only a single point to the GPU).
But I realized that a geometry shader can only output 1024 scalars per invocation, and that it is not designed to generate a lot of data.
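
For reference, a rough illustration of where that limit shows up (my own example, not something from this thread): in D3D11 the product of [maxvertexcount] and the number of scalars in the GS output struct may not exceed 1024.

struct GSOut
{
    float4 pos    : SV_Position;  // 4 scalars
    float3 normal : NORMAL;       // 3 scalars
    float2 uv     : TEXCOORD0;    // 2 scalars -> 9 scalars per vertex
};

// 113 * 9 = 1017 <= 1024, so about 113 vertices is the most this layout allows.
[maxvertexcount(113)]
void SphereGS(point float4 center[1] : POSITION,
              inout TriangleStream<GSOut> stream)
{
    // Far too few vertices for a smooth sphere, hence the question above.
}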

That's too bad! Why such a design?

Then I thought it didn't matter: I would use the hull and domain shaders to tessellate my sphere. But once again I was disappointed: the geometry shader comes after the tessellator stage! Again: why such a design?

So I guess what I have to do is just generate my sphere on the CPU and tessellate it in the tessellator stage; but I'd like to understand why they designed the graphics pipeline that way... I'm sure I'm missing something...

Cheers,


#2 Nik02   Crossbones+   -  Reputation: 2880


Posted 25 November 2012 - 06:06 AM

You can effectively use the tessellator to generate a sphere from one point.

There are architectural and performance reasons as to why the GS is after tessellation.

Consider that the geometry shader has a limit on the amount of memory in its output stream (which is, believe it or not, well justified). The memory to which the geometry shader writes is effectively not as high performance as the memory from which the input assembler reads, because the pipeline can make very few assumptions about the GS output until it is actually run.

The tessellator very much prefers a static input patch layout over a potentially dynamic one (such as what would result from a GS), because configuring the tessellator is internally expensive and would potentially need to be done for each input primitive.

More practically, one common use of the geometry shader is to extrude shadow volumes from triangles. It would be very inconvenient if the geometry shader were run before the tessellator in this case.

Niko Suni


#3 kauna   Crossbones+   -  Reputation: 2747


Posted 25 November 2012 - 06:54 AM

Another use of the geometry shader is to select the render target when rendering to a render target array. In this case the geometry shader is typically little more than a pass-through shader that sets the correct destination render target slice for each primitive. This helps reduce the number of draw calls when creating cascaded shadow maps.
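
A minimal sketch of such a pass-through shader (the structs and the way the cascade index reaches the GS are my own assumptions, not from this post):

struct GSIn  { float4 pos : SV_Position; uint cascade : CASCADE; };
struct GSOut { float4 pos : SV_Position; uint rtIndex : SV_RenderTargetArrayIndex; };

[maxvertexcount(3)]
void CascadeGS(triangle GSIn input[3], inout TriangleStream<GSOut> stream)
{
    GSOut o;
    o.rtIndex = input[0].cascade;   // choose the destination array slice (cascade)
    [unroll]
    for (int i = 0; i < 3; ++i)
    {
        o.pos = input[i].pos;       // pass the vertex through unchanged
        stream.Append(o);
    }
    stream.RestartStrip();
}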

Best regards!

#4 smallGame   Members   -  Reputation: 208


Posted 25 November 2012 - 09:30 AM

Thanks a lot for your answers !

#5 Jason Z   Crossbones+   -  Reputation: 5163


Posted 25 November 2012 - 08:42 PM

One other point to consider is that the geometry shader is intended to operate on all primitives. Since the tessellation stages produce primitives, it is a logical choice to have the geometry shader appear after them in the pipeline.

If you really want to generate the geometry in the GS and then tessellate it, you could always use the stream output functionality to capture the output from your GS and then feed that back into the pipeline to do the tessellation work. This will not be very efficient, but if you are looking for everything to be done on the GPU then this is one option.

However, whatever your desired input primitive is to generate the sphere, I have to agree with Niko that you can easily generate the base portion of the sphere in the hull shader, then tessellate it and produce the output vertices in the domain shader - it should be no problem! This would make the most efficient usage of the hardware and let you keep everything on the GPU.
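
A rough sketch of that hull/domain approach (the parameter names and the fixed tessellation factor are my own assumptions, not Jason's or Niko's code): a one-control-point quad patch carries the sphere's center and radius, and the domain shader maps the (u, v) domain onto the sphere surface.

struct ControlPoint { float3 center : POSITION; float radius : RADIUS; };

struct PatchConstants
{
    float edges[4]  : SV_TessFactor;
    float inside[2] : SV_InsideTessFactor;
};

PatchConstants ConstantsHS(InputPatch<ControlPoint, 1> patch)
{
    PatchConstants pc;
    // Fixed tessellation factor for simplicity; this could be distance-based.
    pc.edges[0] = pc.edges[1] = pc.edges[2] = pc.edges[3] = 16;
    pc.inside[0] = pc.inside[1] = 16;
    return pc;
}

[domain("quad")]
[partitioning("integer")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(1)]
[patchconstantfunc("ConstantsHS")]
ControlPoint HS(InputPatch<ControlPoint, 1> patch, uint i : SV_OutputControlPointID)
{
    return patch[i];    // just pass the single control point through
}

struct DSOut { float4 pos : SV_Position; float3 normal : NORMAL; };

cbuffer Camera { float4x4 viewProj; };   // row-vector convention assumed

[domain("quad")]
DSOut DS(PatchConstants pc, float2 uv : SV_DomainLocation,
         const OutputPatch<ControlPoint, 1> patch)
{
    // Map the quad domain to spherical coordinates.
    float  phi   = uv.x * 2 * 3.14159265;   // longitude, 0..2*pi
    float  theta = uv.y * 3.14159265;       // latitude,  0..pi
    float3 n     = float3(sin(theta) * cos(phi), cos(theta), sin(theta) * sin(phi));

    DSOut o;
    o.normal = n;
    o.pos    = mul(float4(patch[0].center + n * patch[0].radius, 1), viewProj);
    return o;
}

The quad domain has pole and seam issues; mapping six patches of a cube onto the sphere avoids them, but the idea is the same.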

#6 Tsus   Members   -  Reputation: 1049


Posted 26 November 2012 - 04:31 AM

Ok, since we’re on the geometry shader's right to exist...

Also the geometry shader allows you to dynamically insert new primitives into an existing vertex stream. Consider a particle system in which a few “launch particles” are emitted that explode after a certain time, creating multiple new “secondary particles”. By using Stream Output (steered by the geometry shader stage) you can feed the vertex data back into a vertex buffer and use it in the next frame. (See DirectXSDK, "GSParticles".)
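
A hedged sketch of that idea (the struct layout and constants are my own, not taken from the SDK sample): each point is a particle, an expired "launcher" is replaced by several secondaries, and with stream output the emitted points are captured into a vertex buffer that becomes next frame's input.

struct Particle
{
    float3 pos  : POSITION;
    float3 vel  : VELOCITY;
    float  age  : AGE;
    uint   type : TYPE;      // 0 = launcher, 1 = secondary
};

cbuffer PerFrame { float dt; };

[maxvertexcount(8)]
void ParticleGS(point Particle p[1], inout PointStream<Particle> stream)
{
    Particle cur = p[0];
    cur.age += dt;

    if (cur.type == 0 && cur.age > 1.0f)
    {
        // The launcher explodes: replace it with a handful of secondaries.
        for (int i = 0; i < 8; ++i)
        {
            Particle s = cur;
            s.type = 1;
            s.age  = 0;
            s.vel  = cur.vel + float3(i - 3.5f, 1, 0);   // crude spread
            stream.Append(s);
        }
    }
    else if (cur.age < 5.0f)
    {
        cur.pos += cur.vel * dt;   // keep the particle alive and advect it
        stream.Append(cur);
    }
    // Particles older than 5 seconds are simply not re-emitted, i.e. they die.
}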

A second -- very important -- thing about stream output is that a geometry shader always “inserts” primitives into the stream (it doesn't just append at the end). Imagine you have a line consisting of multiple particles. (Now comes a flow visualization example.) If you transport (i.e. advect) the particles in some sort of vector field (like a fluid or turbulent air), you can use the geometry shader to refine the line segments if two adjacent vertices (A and B) are transported too far apart. (Which is basically just inserting a new vertex between A and B.) The important thing is: the topology is preserved. You still have a point list that can be rendered as a line strip, because the order still fits since the new vertex sits between A and B in the output stream. Doing this with compute shaders is rather cumbersome. Therefore, I often use the geometry shader in GPGPU applications (whenever I need to “insert” data into a stream).
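
Roughly, such a refinement pass might look like this (a simplification of my own: the point buffer is drawn as a line strip so the GS sees segments, and the output point stream is captured with stream output):

struct LineVertex { float3 pos : POSITION; };

cbuffer Params { float maxSegmentLength; };

[maxvertexcount(2)]
void RefineGS(line LineVertex v[2], inout PointStream<LineVertex> stream)
{
    LineVertex a = v[0];
    LineVertex b = v[1];

    stream.Append(a);                      // A is always written out

    if (distance(a.pos, b.pos) > maxSegmentLength)
    {
        LineVertex mid;
        mid.pos = 0.5f * (a.pos + b.pos);  // insert a new vertex between A and B
        stream.Append(mid);
    }
    // B is written when it is processed as the first vertex of the next
    // segment; the very last vertex of the strip needs a small special case,
    // omitted here.
}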

Alright, and to talk about spheres... You could also generate a viewport-aligned quad in your geometry shader that covers the sphere and do an analytic ray cast against the sphere in the pixel shader to compute the position and normal, discarding everything outside the sphere. If you have many spheres with varying levels of detail that should be rendered in a single draw call, this approach might not be so demanding for the tessellator.
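
A compact sketch of that impostor idea (simplified, names mine; depth output and exact silhouette sizing are omitted). Everything is done in view space so the quad always faces the camera:

struct SphereIn { float3 centerVS : POSITION; float radius : RADIUS; };

struct QuadVert
{
    float4 pos      : SV_Position;
    float3 posVS    : VIEWPOS;
    float3 centerVS : CENTER;
    float  radius   : RADIUS;
};

cbuffer Camera { float4x4 proj; };

[maxvertexcount(4)]
void ImpostorGS(point SphereIn s[1], inout TriangleStream<QuadVert> stream)
{
    const float2 corners[4] = { float2(-1, -1), float2(-1, 1),
                                float2( 1, -1), float2( 1,  1) };
    QuadVert o;
    o.centerVS = s[0].centerVS;
    o.radius   = s[0].radius;
    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        // Expand the point into a camera-facing quad in view space.
        o.posVS = s[0].centerVS + float3(corners[i] * s[0].radius, 0);
        o.pos   = mul(float4(o.posVS, 1), proj);
        stream.Append(o);
    }
}

float4 ImpostorPS(QuadVert i) : SV_Target
{
    // Ray from the eye (the view-space origin) through this pixel.
    float3 dir  = normalize(i.posVS);
    float  b    = dot(dir, i.centerVS);
    float  c    = dot(i.centerVS, i.centerVS) - i.radius * i.radius;
    float  disc = b * b - c;
    if (disc < 0) discard;                 // the ray misses the sphere

    float3 hit    = dir * (b - sqrt(disc));
    float3 normal = normalize(hit - i.centerVS);
    return float4(normal * 0.5 + 0.5, 1);  // just visualize the normal
}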

#7 wiselogi   Members   -  Reputation: 116


Posted 26 November 2012 - 01:44 PM

This tutorial does exactly what you are looking for:

http://www.geeks3d.com/20101126/direct3d-11-tessellation-tutorial/

It shows how to build any parametric surface entirely on the GPU using the tessellator in one pass.

#8 smallGame   Members   -  Reputation: 208


Posted 12 December 2012 - 10:51 AM

Sorry for the late answer (no internet for a bit)

If you really want to generate the geometry in the GS and then tessellate it

No, I don't really want to use the GS; it was just one idea for doing it.

you can easily generate the base portion of the sphere in the hull shader

Ok, that is a good solution; I didn't know I could do that. Thanks!!! :)

Thanks wiselogi and thanks Tsus!!! :)



