Vertex Shader and D3DTLVERTEX

7 comments, last by Evil Steve 12 years ago
Hello,

I'm trying to implement a vertex shader in my program.
I want to pass transformed, lit vertices in via DrawPrimitiveUP.

However, my vertex shader doesn't seem to take effect.

What's the problem?

Vertex shader code (it compiles without error):

struct VertexIn
{
    float4 Pos      : POSITION;
    float4 Diffuse  : COLOR0;
    float4 Specular : COLOR1;
    float2 tex      : TEXCOORD0;
};

struct VertexOut
{
    float4 Pos   : POSITION;
    float4 Color : COLOR;
};

VertexOut VShader( VertexIn In )
{
    VertexOut Out;

    Out.Pos = In.Pos;
    Out.Color = float4(1,1,1,1);

    return Out;
}

technique FirstTechnique
{
    pass FirstPass
    {
        Lighting = FALSE;
        ZEnable = TRUE;

        VertexShader = compile vs_2_0 VShader();
    }
}


FVF vertex structure D3DTLVERTEX that's passed in via DrawPrimitiveUP:

struct D3DTLVERTEX
{
    float x, y, z, rhw;
    D3DCOLOR diffuse;
    D3DCOLOR specular;
    float tu, tv;
};


Any idea?
Hi,

Typically the vertex shader is disabled when the vertex structure contains transformed vertices.

However, as far as I know, you should be able to pass a 4-component vector (not yet divided by the w component) as the position to the vertex shader and pass it through without any transformation. You'll just need to lie to Direct3D that the position isn't pre-transformed, so that the vertex shader doesn't get disabled.

Cheers!
Ouch, ok, thanks.

I'll confirm later whether it's definitely caused by the vertices being pre-transformed.
Yeah, D3DFVF_XYZRHW bypasses vertex processing entirely, so the vertex shader never runs.
Bummer..
Maybe D3DFVF_XYZW will do the trick?
Have you considered using a vertex declaration? The FVF (flexible vertex format) stuff is rather inflexible.
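As a sketch of what that might look like (untested, assuming the D3DTLVERTEX layout above, with the position declared as a plain FLOAT4 so Direct3D doesn't disable the vertex shader):

```cpp
// Vertex declaration matching D3DTLVERTEX, but with x, y, z, rhw declared as
// an untransformed FLOAT4 position (read in the shader as x, y, z, w).
D3DVERTEXELEMENT9 decl[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT4,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 16, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    { 0, 20, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    1 },
    { 0, 24, D3DDECLTYPE_FLOAT2,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* pDecl = NULL;
pDevice->CreateVertexDeclaration(decl, &pDecl); // pDevice: your IDirect3DDevice9*
pDevice->SetVertexDeclaration(pDecl);           // instead of SetFVF()
```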

Cheers!
I've tried tricking D3D by using D3DFVF_XYZW, yes, but then the screen goes black.

So I'm assuming I have to do some additional vertex processing in the vertex shader, but I don't know what. There's very little information about that on the internet.
The vertex shader needs to output vertices in clip space (which, after the divide by w, runs from -1 to +1 in X and Y, and 0 to 1 in Z). If you're passing in screen coordinates, then you'll need to scale the coordinates appropriately - e.g. for a 1280x720 screen resolution, you'd want:

Out.pos.x = In.pos.x / 640.0 - 1.0;
Out.pos.y = In.pos.y / 360.0 - 1.0;
Out.pos.z = 0;
But in a normal scenario, you pass a vertex INTO the vertex shader that's in object space, and then inside the vertex shader you have to mul() it with the World*View*Projection matrix.

So the output vertex is normally in screen space (projection).

Why does it have to be in clip space in this case?

Or am I thinking wrong?

EDIT: I tried what you said, Evil Steve, and that worked! :) (at least partially):
I had to flip the y coordinate, and z can't be zero because of the z-buffer (I pass the input z through instead).
w has to be 1.



Out.pos.x = In.pos.x / 640.0 - 1.0;
Out.pos.y = 1.0 - In.pos.y / 360.0;
Out.pos.z = In.pos.z;
Out.pos.w = 1;

But in a normal scenario, you pass a vertex INTO the vertex shader that's in object space, and then inside the vertex shader you have to mul() it with the World*View*Projection matrix.

So the output vertex is normally in screen space (projection).

Why does it have to be in clip space in this case?

Or am I thinking wrong?
The projection matrix puts coordinates into clip space, not screen space. It's the job of the rasterizer (after the perspective divide by w) to convert the vertex coordinates into viewport (screen) space. You can multiply a vector by the world*view*projection matrix and see the sort of coordinate ranges you get out.


EDIT: I tried what you said, Evil Steve, and that worked! :) (at least partially):
I had to flip the y coordinate, and z can't be zero because of the z-buffer (I pass the input z through instead).
w has to be 1.
Ah, I forgot about w :)

This topic is closed to new replies.
