The problem is that the texture coordinates seem to be misinterpreted by the shader: the quad I'm drawing is rendered as if its texture coordinates were (0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (1.0, 0.0) or something like that... My guess is that when I add the color entry I get some kind of incorrect alignment when data is fetched for the later vertices, because of an error either on the application side or in the receiving vertex shader input struct (App_Input). Anyway, here's the vertex structure and FVF used in the application:
```cpp
struct MYVERTEX {
    D3DXVECTOR3 position;
    D3DXVECTOR2 textureCoordinates;
    DWORD       color;
};
```
The FVF:
#define MYVERTEX_FVF (D3DFVF_XYZ | D3DFVF_TEX1 | D3DFVF_DIFFUSE)
and the order is correct according to MSDN...
Quote:
- A dword color value is mapped to a range of [0..1] when it arrives to the vertex shader.
OK, so does it become a float4? Should I modify my vertex shader input struct to:
```hlsl
struct App_Input {
    float3 vertexPos : POSITION;
    float2 texture0  : TEXCOORD0;
    float4 color     : COLOR0;
};
```
The problem is that when I try this I still get the same strange rendering behavior... Is it incorrect to use a float4 for the color?
Also, for the other key part - moving this color info from the vertex shader to the pixel shader. Once I get the first part to work, I know I need to change this line in the vertex shader:
vs_out.color = float4(1.0, 1.0, 1.0, 1.0);
to
vs_out.color = IN.color;
But after that I don't know how to transfer it correctly to the pixel shader. Can I just add another input argument to the pixel shader, or do I need to use semantics to match it to the color value returned from the vertex shader?
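For what it's worth, here is a hedged HLSL sketch of how I understand the plumbing is supposed to look: the vertex shader writes the value into an output field tagged COLOR0, and the pixel shader declares an input with the same semantic; the rasterizer matches them by semantic, not by parameter name. (The matrix and sampler names, and the struct names, are placeholders, not my actual code.)

```hlsl
// Placeholder uniforms, not from my actual effect file.
float4x4 worldViewProj;
sampler2D texSampler;

struct App_Input {
    float3 vertexPos : POSITION;
    float2 texture0  : TEXCOORD0;
    float4 color     : COLOR0;
};

struct VS_Output {
    float4 position : POSITION;
    float2 texture0 : TEXCOORD0;
    float4 color    : COLOR0;   // interpolated across the triangle
};

VS_Output vs_main(App_Input IN) {
    VS_Output vs_out;
    vs_out.position = mul(float4(IN.vertexPos, 1.0), worldViewProj);
    vs_out.texture0 = IN.texture0;
    vs_out.color    = IN.color;  // pass the vertex color through
    return vs_out;
}

// The pixel shader picks up the interpolated value by its COLOR0 semantic,
// regardless of what the parameter is called here.
float4 ps_main(float2 texture0 : TEXCOORD0,
               float4 color    : COLOR0) : COLOR {
    return tex2D(texSampler, texture0) * color;
}
```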
[Edited by - all_names_taken on June 7, 2006 6:24:17 AM]