Direct3D and Cg

Started by
3 comments, last by Benden 14 years, 7 months ago
Hello, I have a vertex shader in Cg with the following signature:

void C6E2v_particle(float4 pInitial : POSITION,
                    float4 vInitial : TEXCOORD0,
                    float tInitial : TEXCOORD1,
                    out float4 oPosition : POSITION,
                    out float4 color : COLOR,
                    out float pointSize : PSIZE,
                    uniform float globalTime,
                    uniform float4 acceleration,
                    uniform float4x4 modelViewProj)

In my Direct3D program, I am getting a handle to the parameters using:

CGparameter timeInitial = cgGetNamedParameter(myCGProgram, "tInitial");

I can get a handle to all the parameters without any problem. However, when I try to set the parameter values, it does not work. For example, I am trying to set the value of the "tInitial" parameter as follows:

float f = 1.0f;
cgD3D9SetUniform(timeInitial, &f);

The call fails with "Unknown Cg error (error code 1000)". The funny thing is that I can set the values of the "globalTime", "acceleration" and "modelViewProj" variables without any trouble; that is, everything declared with the uniform qualifier. What am I doing wrong? Any help would be greatly appreciated.

Cheers,

/x
cgD3D9SetUniform sets the value of a uniform parameter. You can see from what you posted that tInitial is not a uniform parameter and therefore cannot be set in this fashion.

tInitial will come from whatever vertex buffer(s) you use to render using that shader.
Hello,

Thanks for your reply. I had the same feeling, but I have no idea how to set up the vertex buffers to do this sort of thing.

I know in OpenGL, I can do something like:

glMultiTexCoord1f(GL_TEXTURE1, mytInitial);

How do I achieve something similar with the Direct3D API?

Many thanks,

/x
Thanks for your reply.

After looking at some DirectX documentation, I guess I need to create a vertex buffer and bind it to the data stream.

So, I am guessing I can do something like this:

// Create a struct definition that holds my attributes. For my example:
struct My_Attributes
{
    float p_initial_x, p_initial_y, p_initial_z; // initial position
    float v_initial_x, v_initial_y, v_initial_z; // initial velocity
    float tInitial;                              // initial time
};

And then call CreateVertexBuffer and subsequently call SetStreamSource() in my render method. Would this be a good way to do this?

My question is: would this be interpreted correctly by the vertex shader on the GPU? In an example from the Cg toolkit, I see the following:

The vertex shader is defined as:

C3E2v_Output C3E2v_varying(float2 position : POSITION,
                           float3 color : COLOR,
                           float2 texCoord : TEXCOORD0)


However, the C structure in the calling program is defined as

struct MY_V3F {
    FLOAT x, y, z;
    DWORD color; // ARGB (D3DCOLOR)
};

And a vertex buffer is created as follows:
/* Initialize three vertices for rendering a triangle. */
static const MY_V3F triangleVertices[] = {
    { -0.8f,  0.8f, 0.0f, 0xFF0000 }, /* red */
    {  0.8f,  0.8f, 0.0f, 0x00FF00 }, /* green */
    {  0.0f, -0.8f, 0.0f, 0x0000FF }  /* blue */
};


Even though the position is a float2 in the vertex program, the structure defines it as a 3D point rather than a 2D one. Everything draws correctly on the screen, so I am a bit confused about how it is all interpreted internally. Also, no texture coordinates are supplied, even though texCoord is listed as one of the shader's inputs.

Maybe someone can shed some light on this mystery...

Thanks,

/x
Eyup

You're on the right track...

You basically create your data and upload it to a vertex buffer which is then used by the vertex shader.

It knows how to interpret the data via the FVF set when you call CreateVertexBuffer(). In this case it is D3DFVF_XYZ|D3DFVF_DIFFUSE (check MSDN for info on FVFs).

The D3DFVF_XYZ matches the first three floats in the MY_V3F structure to the 'position : POSITION' input, while D3DFVF_DIFFUSE matches the DWORD to the 'color : COLOR' input. The fact that 'position' is only a float2 instead of a float3 just means the shader ditches the z-component.

Don't worry about the texture coordinates not being supplied... that input will just be full of rubbish (as it's not specified in the vertex buffer), but if you are looking at the sample I think you are, you'll notice the texture coordinate input is not used anyway (see the fragment shader).

