Adding 3.0 Shader Model support

Started by
4 comments, last by d h k 12 years, 4 months ago
I have an application which renders using Direct3D9, more specifically Pixel Shaders via the Effect framework. I have a .fx file, with techniques for 2.0, 2.a and 2.b models. All these techniques have just a pixel shader compiled against the respective model, and I select the best one for the given GPU using the FindNextValidTechnique function. So my techniques look like this:



technique compileFor2_b {
    pass P0 {
        PixelShader = compile ps_2_b ps2FirstPassIgnoringBackground();
    }
}



However, if I try to write one of these techniques for 3.0 or above, I get an error saying I must also implement a vertex shader. The problem is that, according to what I've read (please correct me if I'm wrong), this would imply that the view and projection transforms would now have to be done in the vertex shader. Right now, I'm doing this:



void CD3DDevice::Configure( const CCamera& camera ) {
    D3DXMATRIXA16 matView = camera.calculateViewMatrix();
    m_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );
    D3DXMATRIXA16 matProj = camera.calculateProjectionMatrix();
    m_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
}



So many questions...

  1. Do I have to delete this method and implement its equivalent in a Vertex Shader for all techniques?
  2. If the answer to 1 is yes, how do I do that? Or kindly point me to some book or resource. I've done some pixel shaders, but never a vertex shader.
  3. What about cards which support Pixel Shaders but don't support Vertex Shaders? How do I provide a fallback mechanism?
Yes, you do have to implement a vertex shader with SM 3.0, but it can be quite a simple shader. Just multiply the vertex position by your matrices (you should premultiply the matrices on the CPU, although the Effects framework may pull that out as a pre-shader) and pass everything else through to the pixel shader; there are plenty of examples in the SDK to help you here.

The same shader can be reused for each technique; it will just compile under a different shader model, so that cuts down on code somewhat.

Cards that support PS but not VS don't really exist. I assume you're specifically referring to older Intels here, but in that case the T&L stages will run in software, and D3D will provide software emulation of the vertex shader too, which works and performs quite well.

Alternatively, you can of course specify a vertex shader only for your SM3+ techniques and leave the rest unchanged.
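As a sketch of what that might look like in the .fx file (vsPassThrough and g_matWorldViewProj are made-up names for illustration, not from the SDK samples):

```hlsl
// Hypothetical minimal pass-through vertex shader; g_matWorldViewProj is
// an assumed float4x4 parameter holding the premultiplied matrix.
float4x4 g_matWorldViewProj;

void vsPassThrough( float4 posIn      : POSITION,
                    float2 uvIn       : TEXCOORD0,
                    out float4 posOut : POSITION,
                    out float2 uvOut  : TEXCOORD0 ) {
    posOut = mul( posIn, g_matWorldViewProj );
    uvOut  = uvIn;
}

technique compileFor3_0 {
    pass P0 {
        VertexShader = compile vs_3_0 vsPassThrough();
        PixelShader  = compile ps_3_0 ps2FirstPassIgnoringBackground();
    }
}
```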



Yes, you do have to implement a vertex shader with SM 3.0, but it can be quite a simple shader. Just multiply the vertex position by your matrices (you should premultiply the matrices on the CPU, although the Effects framework may pull that out as a pre-shader) and pass everything else through to the pixel shader; there are plenty of examples in the SDK to help you here.


Ok, sounds reasonable. I still have one doubt though:

What about the view and projection matrix handling I'm currently doing in my D3DDevice? (the second code snippet in my first post) Should I stop doing that and leave it all up to the vertex shader? Instead of calling SetTransform on the IDirect3DDevice9, I should update the vertex shader's matrices, shouldn't I?
To the best of my knowledge, SetTransform only affects the fixed-function pipeline and becomes redundant once you write your own shaders and feed them world, view and projection matrices so they can bring your vertices from their original space into screen space. Hope this helps!
I have just written the vertex shader, and it doesn't crash, but now zooming and panning (which were done using the view and projection matrices) don't work, and I can see black lines between my texture quads.

This is my code before adding the vertex shader (I used only a pixel shader); everything works fine:



struct PS_OUTPUT {
    float4 color : COLOR0;
};

PS_OUTPUT ps2FirstPassIgnoringBackground( float2 vTexCoord : TEXCOORD0, float4 vColor : COLOR ) {
    PS_OUTPUT output;
    if( g_bEnableLinearFilter )
        output.color = applyLinearFilter( vTexCoord, Texture0 );
    else
        output.color = tex2D( Texture0, vTexCoord );
    output.color = applyLutIgnoringBackground( output.color );
    output.color = HideOverlappedPixels( output.color, vTexCoord );
    return output;
}


Adding the vertex shader, I changed it to this (everything else remained just as it was):


// Other declarations omitted because they didn't change
float4x4 g_matProjection;
float4x4 g_matView;

struct VS_OUTPUT {
    float4 pos : POSITION;
    float2 TextureUV : TEXCOORD0;
};

VS_OUTPUT vs( float4 pos : POSITION,
              float2 vTexCoord0 : TEXCOORD0 ) {
    VS_OUTPUT output;
    output.pos = mul( pos, g_matView );
    output.pos = mul( output.pos, g_matProjection );
    output.TextureUV = vTexCoord0;
    return output;
}


struct PS_OUTPUT {
    float4 color : COLOR0;
};

PS_OUTPUT ps2FirstPassIgnoringBackground( VS_OUTPUT input ) {
    PS_OUTPUT output;
    if( g_bEnableLinearFilter )
        output.color = applyLinearFilter( input.TextureUV, Texture0 );
    else
        output.color = tex2D( Texture0, input.TextureUV );
    output.color = applyLutIgnoringBackground( output.color );
    output.color = HideOverlappedPixels( output.color, input.TextureUV );
    return output;
}


Now, instead of using SetTransform, I use this:



void CEffect::Configure( const CCamera& camera ) {
    setParameter( "g_matProjection", &camera.calculateProjectionMatrix() );
    setParameter( "g_matView", &camera.calculateViewMatrix() );
}

void CEffect::setParameter( const string& strName, const D3DXMATRIXA16* newVal ) {
    if( !m_pEffect ) return;
    // Keep the calls outside assert(): in release builds asserts compile
    // away entirely, and the parameter would silently never be set.
    D3DXHANDLE handle = m_pEffect->GetParameterByName( NULL, strName.c_str() );
    assert( handle );
    HRESULT hr = m_pEffect->SetMatrix( handle, newVal );
    assert( SUCCEEDED( hr ) );
}


Where did I mess up?
You're missing the world matrix transformation, from the looks of it! You need to transform each vertex in the vertex shader by the world * view * projection matrix to bring it all the way from its original space (I call that model space) into world space, then into view space, and then into screen space (often referred to as projection space).

Apart from that I can't find any issues with your shader at a glance. So you want to add:


// top of the shader file
float4x4 g_matWorld;

// in the vertex shader, add the first line of the three
output.pos = mul( pos, g_matWorld );
output.pos = mul( output.pos, g_matView );
output.pos = mul( output.pos, g_matProjection );

// in your game ("model" stands for whatever entity you're rendering)
void CEffect::Configure( const CCamera& camera ) {
    setParameter( "g_matWorld", &model.calculateWorldMatrix() );
    setParameter( "g_matView", &camera.calculateViewMatrix() );
    setParameter( "g_matProjection", &camera.calculateProjectionMatrix() );
}


The world matrix holds the position, rotation and scale of what you're trying to render. If you aren't already constructing and storing one for each of your renderable entities (models and so on), you should; there are plenty of guides out there to help you with that.

I'm not sure this is the solution to your problem, however. You should check your function calls when creating the effect and when sending the matrices over to it, and make sure none of them fail.

It's also more efficient to compute the world * view * projection matrix up front and send the result of that multiplication to the vertex shader. That way you do the multiplication once per draw call instead of once per vertex.
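A sketch of what that optimisation looks like on the shader side, assuming a single g_matWorldViewProj parameter (a made-up name) that the C++ side fills once per draw call, e.g. by concatenating the three matrices with D3DXMatrixMultiply (or the D3DXMATRIX operator*) and passing the result to ID3DXEffect::SetMatrix:

```hlsl
float4x4 g_matWorldViewProj;  // world * view * projection, computed on the CPU

VS_OUTPUT vs( float4 pos : POSITION,
              float2 vTexCoord0 : TEXCOORD0 ) {
    VS_OUTPUT output;
    // One mul per vertex instead of three.
    output.pos = mul( pos, g_matWorldViewProj );
    output.TextureUV = vTexCoord0;
    return output;
}
```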

Hope this helps!

