[SOLVED] Shader Noob help

Started by Timptation. 8 comments, last by Timptation 14 years, 11 months ago.
What am I doing wrong here? I have the following errors:

error X4502: invalid output semantic 'POSITION': Legal indices are in [0,0]
error X4502: invalid vs_1_1 output semantic 'POSITION'
error X4502: invalid output semantic 'COLOR': Legal indices are in [0,1]
error X4502: invalid vs_1_1 output semantic 'COLOR'

I'm using the following FVF:

struct NormalTexVertex
{
    float x, y, z;    // Position.
    float nx, ny, nz; // Normal vector.
    float u, v;       // Texture coordinates.
};

#define FVF_NORMAL_TEX (D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1)

I'm using the following output struct:

struct VS_OUTPUT
{
    float position[3] : POSITION;
    float diffuse[3]  : COLOR;
    float uv[2]       : TEXCOORD;
};

[Edited by - Timptation on May 16, 2009 9:07:05 PM]
They're not arrays; they're vectors, and their components can be accessed using .xyzw or .rgba (or any combination, single or otherwise):

struct VS_OUTPUT
{
    float3 position : POSITION;
    float3 diffuse  : COLOR;
    float2 uv       : TEXCOORD0;
};

That is how you would correctly declare vertex position/color/uv. Note the float3/float2: the number is the size of the vector. Hope that helps.
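For example (a quick sketch, not from the original reply; the values are made up), component access on those vectors looks like this:

// Hypothetical snippet showing swizzle access instead of array indexing.
VS_OUTPUT output;
output.position = float3(1.0f, 2.0f, 3.0f);
output.position.x = 0.0f;              // single component by name
output.diffuse = output.position.zyx;  // any combination/order of components
output.uv = output.position.xy;        // .xy yields a float2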
That did help, but now I've got:

Direct3D9: (ERROR) :Vertex shader function usage (D3DDECLUSAGE_TEXCOORD,0) does not have corresponding usage in the current vertex declaration

But I DO have that, don't I?

SHADER:

struct VS_INPUT{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD;
};

struct VS_OUTPUT{
    float4 position : POSITION;
    float3 diffuse  : COLOR;
    float2 uv       : TEXCOORD;
};

CODE:

struct NormalTexVertex{
    float x, y, z;    // Position.
    float nx, ny, nz; // Normal vector.
    float u, v;       // Texture coordinates.
};
#define FVF_NORMAL_TEX (D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1)

I tried changing D3DFVF_TEX1 to D3DFVF_TEX0, but got the same error. I'm using D3DFVF_TEX1 because D3DFVF_TEX0 is 0, and so my define conflicts with another FVF define I have that's just (D3DFVF_XYZ | D3DFVF_NORMAL).
Have a look into "vertex declarations" instead of FVFs - they give you much better control over how your application data maps to the shader inputs. FVFs pre-date shaders.

My guess would be that you're not using the texture coordinate in your shader and it gets optimized out by the compiler. Can you post the body of your vertex shader? Have a look at fxc.exe and get it to write the compiled output to an HTML file - it'll tell you what inputs it's actually going to use.


hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Jroggy has the solution, but you've copied it wrong.

The correct syntax for texcoords is TEXCOORD0 - TEXCOORD32; plain TEXCOORD isn't valid.
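For example (a sketch, not from the original post), with an explicit index it would look like:

struct VS_INPUT{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0; // explicit index: first texcoord set
    // float2 uv2   : TEXCOORD1; // a second set would use the next index
};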

shaken, not stirred

sirpalee: I tried defining them that way, but still got the same error. The DirectX book I have in front of me says that leaving off the number implies 0. It's a 9.0c book though, and maybe it's different in 10?

I've not heard of fxc.exe, but I see it and am looking into it now. I'm aware of vertex declarations, but I also understand that FVFs are internally converted to vertex declarations, so I was hoping to get away with not having to write the vertex declaration. I probably should just to get the experience though. I'm always wanting something for nothing...

This may not work, and I know it's not optimized the way I'm converting from float3 to vector and all, but this is the shader main body:

VS_OUTPUT output = (VS_OUTPUT)0;

// Pass the texture coordinates straight through.
output.uv[0] = input.uv[0];
output.uv[1] = input.uv[1];

// Promote the position to a 4-vector and transform it.
vector outPos;
outPos.x = input.position[0];
outPos.y = input.position[1];
outPos.z = input.position[2];
outPos.w = 1.0f;

outPos = mul(outPos, ViewProjMatrix);

output.position[0] = outPos.x;
output.position[1] = outPos.y;
output.position[2] = outPos.z;
output.position[3] = 0.0f;

// Transform the light direction and normal into view space.
LightDirection.w = 0.0f;
vector inNorm;
inNorm.x = input.normal[0];
inNorm.y = input.normal[1];
inNorm.z = input.normal[2];
inNorm.w = 0.0f;
LightDirection = mul(LightDirection, ViewMatrix);
inNorm = mul(inNorm, ViewMatrix);

// Lambertian term, clamped to zero.
float s = dot(LightDirection, inNorm);

if( s < 0.0f )
    s = 0.0f;

// Combine the ambient and diffuse contributions.
vector outDiff;
outDiff = (AmbientMtrl * AmbientLightIntensity) +
          (s * (DiffuseLightIntensity * DiffuseMtrl));

output.diffuse[0] = outDiff.x;
output.diffuse[1] = outDiff.y;
output.diffuse[2] = outDiff.z;

return output;
It's running, but I don't see anything but my cleared screen. Rather than use FVFs, I have this now:

// Trying out vertex declaration.
D3DVERTEXELEMENT9 vdecl[] = {
    { 0, 0,  D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 }, // offset 0: x, y, z
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 }, // offset 12: nx, ny, nz
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 }, // offset 24: u, v
    D3DDECL_END() };

device->CreateVertexDeclaration(vdecl, &g_vertDecl);
device->SetVertexDeclaration(g_vertDecl);
Looks to me like the shader you posted is still using the input as arrays instead of float2/3 vectors.
My input struct is:
struct VS_INPUT{
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0;
};

"float3" being a predefined type for "vector<float, 3>" for example. Is this wrong? Changing it to the "vector<float, x>" format didnt change anything...
Feel so awesome for figuring this out lol. The problem was that I was multiplying a 1x3 vector with a 4x4 matrix. I can't believe that wasn't an error. Perhaps it automatically filled in the extra space with zeros when I needed ones. Anyway, thanks a bunch for the help guys!
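For reference, a minimal sketch of the fix described (assuming the float4 output position and the ViewProjMatrix from the posts above):

// Sketch: promote the float3 input position to a float4 with w = 1.0f
// before multiplying by the 4x4 matrix, and keep the resulting w
// instead of overwriting it afterwards.
output.position = mul(float4(input.position, 1.0f), ViewProjMatrix);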

This topic is closed to new replies.
