I thought, rather than creating and altering vertex buffers at run-time, I could leave position info out of the vertex buffer entirely and specify the coordinates through a constant buffer instead. That way I can just send the coordinates of the corners to the GPU (four float2s) before I draw. My intention was to have the vertex shader pick these up and set the vertices to the appropriate values.
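For context, the application side I have in mind looks something like this (just a sketch, assuming a triangle-strip quad; SetQuadCorners, pEffect, pDevice and the corner values are placeholders for my actual code):

#include <d3d10.h>

// Sketch: copy the four corner positions into the Positions array in the
// posArray constant buffer, then draw the quad.
void SetQuadCorners(ID3D10Device* pDevice, ID3D10Effect* pEffect)
{
    // Corner positions, padded to four floats apiece so the effects
    // runtime can always read a full vector from each entry.
    float corners[4][4] =
    {
        { -0.5f,  0.5f, 0.0f, 0.0f },  // top-left
        {  0.5f,  0.5f, 0.0f, 0.0f },  // top-right
        { -0.5f, -0.5f, 0.0f, 0.0f },  // bottom-left
        {  0.5f, -0.5f, 0.0f, 0.0f },  // bottom-right
    };

    // Set each float2 element of the cbuffer array individually.
    ID3D10EffectVariable* pPositions = pEffect->GetVariableByName("Positions");
    for (UINT i = 0; i < 4; ++i)
    {
        pPositions->GetElement(i)->AsVector()->SetFloatVector(corners[i]);
    }

    // Apply the pass and draw the four vertices as a triangle strip.
    pEffect->GetTechniqueByName("Render")->GetPassByIndex(0)->Apply(0);
    pDevice->Draw(4, 0);
}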
A problem I hit was how to get the vertex shader to use a different coordinate for each vertex in the primitive. The solution I came up with was to put the screen-space positions in an array in a constant buffer and keep a static int as the offset into that array. I then increment this static variable after each vertex has been processed, and have it wrap back to zero once it reaches 4.
I thought this would work elegantly, but the offset doesn't seem to be incrementing. I've run it through PIX, and the increment doesn't even appear in the compiled shader code at all. Am I doing something wrong, or is there just no way to do it like this? Or is it just a bad idea to do it this way at all?
Here is my shader code:
// SHADER_QUAD

// Offset into Positions; intended to advance once per vertex.
static int PosArrayCounter;

// Quad corner positions, set from the application before each draw.
cbuffer posArray
{
    float2 Positions[4];
};

Texture2D Image;

struct VS_OUTPUT
{
    float4 PosH : SV_POSITION;
    float2 Tex  : TEXCOORD1;
};

SamplerState samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

VS_OUTPUT VS(float2 Tex : TEXCOORD)
{
    VS_OUTPUT output = (VS_OUTPUT)0;

    // Look up this vertex's corner position from the constant buffer.
    output.PosH = float4(Positions[PosArrayCounter], 1, 1);
    output.Tex = Tex;

    // Advance to the next corner, wrapping back to zero after the fourth.
    PosArrayCounter++;
    if (PosArrayCounter > 3)
    {
        PosArrayCounter = 0;
    }

    return output;
}

float4 PS( VS_OUTPUT input ) : SV_Target
{
    float4 colour = Image.Sample(samLinear, input.Tex);
    return colour;
}

//--------------------------------------------------------------------------------------
technique10 Render
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PS() ) );
    }
}