geometry shader basic example

Started by
3 comments, last by Bacterius 12 years, 1 month ago
So here is the thing,

I need a really, really basic example of a GS with SlimDX; more specifically, I would like examples of how to do the following.

Example #1: I would like to send a single vertex point to the GS and have the GS make a square (1 face of a cube).

Example #2: I would like to send a single vertex point to the GS with a color assignment and have the GS convert that vertex into a cube of the desired color (all faces with the same color).

Example #3: Send the single vertex, with a color, and brightness of 1 to 16, and have the cube made on the GS lit from 1/16 to 16/16, based on that number.

It would be of great help in my new project, as this is the first time I'm using something other than XNA, and all this stuff is new to me overall.

Appreciate all the help, tx
Why do you need these examples specifically? Here is the basic skeleton of a point-sprite geometry shader (point -> screen-space quad) so you can understand how it works (in practice it's a little more complex, especially with the different coordinate spaces, but the idea is there). For Example #2 it's just a matter of streaming a per-vertex parameter (in this case, color) from the geometry shader to the pixel shader, which is pretty trivial. Example #3 doesn't build on the geometry shader at all (the brightness calculation is done in the pixel shader). Here is the skeleton:


struct GeometryIn // geometry shader input
{
    float4 p : POSITION; // point sprite's center (in clip space)
};

struct PixelIn // pixel shader input vertex
{
    float4 p  : SV_POSITION; // rasterized position
    float2 uv : TEXCOORD0;   // uv coordinates (for texturing)
};

[maxvertexcount(4)] // point sprite quad, so 4 vertices
void main( point GeometryIn input[1], inout TriangleStream<PixelIn> stream )
{
    // size of your quad (you can make this a shader variable, or even a per-vertex parameter, if you must)
    static const float scale = 0.01;

    // find the four vertices of the quad from the point sprite
    float4 v1 = input[0].p + float4(+scale * 0.5, +scale * 0.5, 0, 0); // top-right
    float4 v2 = input[0].p + float4(+scale * 0.5, -scale * 0.5, 0, 0); // bottom-right
    float4 v3 = input[0].p + float4(-scale * 0.5, -scale * 0.5, 0, 0); // bottom-left
    float4 v4 = input[0].p + float4(-scale * 0.5, +scale * 0.5, 0, 0); // top-left

    // stream out each vertex with the proper UVs
    // (note the zigzag strip order: right edge, then left edge, so the
    // two triangles of the strip actually cover the whole quad)
    PixelIn output;

    output.uv = float2(1, 1);
    output.p = v1;
    stream.Append(output);

    output.uv = float2(1, 0);
    output.p = v2;
    stream.Append(output);

    output.uv = float2(0, 1);
    output.p = v4;
    stream.Append(output);

    output.uv = float2(0, 0);
    output.p = v3;
    stream.Append(output);

    // we're done!
}
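To illustrate the Example #2 idea (a sketch, with illustrative names, not code from the thread): the skeleton above only needs the color added to both structs and copied through in the geometry shader.

```hlsl
struct GeometryIn
{
    float4 p     : POSITION; // point's center (in clip space)
    float4 color : COLOR0;   // per-point color set by the application
};

struct PixelIn
{
    float4 p     : SV_POSITION;
    float2 uv    : TEXCOORD0;
    float4 color : COLOR0;   // handed through to the pixel shader
};

// inside the geometry shader, before each stream.Append(output):
//     output.color = input[0].color;

// the pixel shader then just returns it, so every face gets the same color
float4 PS(PixelIn input) : SV_Target
{
    return input.color;
}
```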

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

Just realised you might also want to know how to use the geometry shader within the SlimDX code. Well, unless you are going to use stream output, it's pretty much identical to how you handle vertex and pixel shaders: if you're using the effects framework you just add the GS declaration to the technique, and if you're using raw shaders you compile it as you would compile a PS or a VS, but with the gs_4_0 (or gs_5_0) compiler profile.
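For the effects-framework route, the technique declaration might look something like this (the VS and PS entry-point names are placeholders; "main" is the geometry shader from the skeleton above):

```hlsl
technique10 PointToQuad
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetGeometryShader(CompileShader(gs_4_0, main())); // the GS above
        SetPixelShader(CompileShader(ps_4_0, PS()));
    }
}
```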

If you're using stream output it becomes considerably more complex, but I don't think you need that just yet.


I'm making a proof of concept of a terrain made of cubes, with several techniques, in XNA. I've done some tests with CPU-side optimization to maximize the view distance, but I seem to need more. Someone told me I should try the GS, but it's not available in XNA since that uses DX9, so I'm trying some simple examples in SlimDX. I wanted to start playing around with the GS, but I could not find any example code.

OK, so in the example you just posted, you're making something like a front quad face from a point, without moving on the Z axis, am I correct? But how does it know how to put the faces together (make the lines between the vertices)?
It's making a point sprite, which is just a 2D texture drawn onto the screen (since that's one of the most common uses of geometry shaders). If you want to create a 3D cube face instead, you can adapt the code to modify the output vertices.

The GPU knows how to link the vertices because you are streaming them out into a TriangleStream. When you append vertices via the Append() method it creates triangle strips; if you call RestartStrip() you restart the strip, so calling it after every three vertices will create a triangle list instead.
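As a rough sketch of that adaptation (the viewProjection constant, the CubeFaceGS name, and the world-space input point are my assumptions, not code from this thread), one face of a cube could be emitted by offsetting the point in world space first and projecting afterwards:

```hlsl
// assumed shader constant; how you set it from SlimDX is up to your app
cbuffer PerFrame
{
    float4x4 viewProjection;
};

[maxvertexcount(4)]
void CubeFaceGS(point GeometryIn input[1], inout TriangleStream<PixelIn> stream)
{
    static const float halfSize = 0.5; // half the cube's edge length

    // the +Z face of a cube centered on the input point, in strip order
    // (zigzag: top-left, top-right, bottom-left, bottom-right)
    const float3 corners[4] =
    {
        float3(-halfSize, +halfSize, +halfSize),
        float3(+halfSize, +halfSize, +halfSize),
        float3(-halfSize, -halfSize, +halfSize),
        float3(+halfSize, -halfSize, +halfSize),
    };

    PixelIn output;
    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        // offset in world space first, then transform to clip space
        output.p  = mul(float4(input[0].p.xyz + corners[i], 1.0), viewProjection);
        output.uv = float2(i % 2, i / 2);
        stream.Append(output);
    }

    // end this face's strip, so the other five faces (with maxvertexcount
    // raised to 24) each start a fresh strip
    stream.RestartStrip();
}
```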


