I used to handle my transformation matrices (world, view, projection) at the "device level", so to speak. That is, I used calls like these:
m_pd3dDevice->SetTransform( D3DTS_WORLD, pNewVal );
m_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );
m_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj );
Everything worked perfectly that way. But when I added support for HLSL shader model 3.0, I was forced to define a vertex shader (my effect wouldn't compile without one). Until then I hadn't needed a vertex shader, because there was no need for custom transformations at that level. In any case, it seems that once you define a vertex shader, you must perform the world-view-projection transformation there. That's how I understood it, so correct me if I'm wrong. My vertex and pixel shaders ended up like this:
sampler2D Texture0;

float4x4 g_matProjection;
float4x4 g_matView;
float4x4 g_matWorld;

struct VertexShaderInput {
    float4 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput {
    float4 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
};

VertexShaderOutput BasicVertexShader( VertexShaderInput input ) {
    VertexShaderOutput output;
    output.Position = mul( input.Position, g_matWorld );
    output.Position = mul( output.Position, g_matView );
    output.Position = mul( output.Position, g_matProjection );
    output.TexCoord = input.TexCoord; // Just pass it along
    return output;
}
struct PixelShaderOutput {
    float4 color : COLOR0;
};

PixelShaderOutput PixelShaderFirstPass( VertexShaderOutput input ) {
    // Magic happens here:
    // we use the tex2D function to sample Texture0
}
To make this work, I replaced the SetTransform calls with calls that set the appropriate effect variable, like so:
m_pd3dDevice->SetTransform( D3DTS_WORLD, pNewVal );      // Now: m_pEffect->setParameter( "g_matWorld", pNewVal );
m_pd3dDevice->SetTransform( D3DTS_VIEW, &matView );      // Now: m_pEffect->setParameter( "g_matView", &matView );
m_pd3dDevice->SetTransform( D3DTS_PROJECTION, &matProj ); // Now: m_pEffect->setParameter( "g_matProjection", &matProj );
void CEffect::setParameter( const string& strName, const D3DXMATRIXA16* newVal ) {
    if( !m_pEffect ) return;
    // Keep the D3DX calls outside assert(): if NDEBUG is defined, the whole
    // assert expression is compiled out and the parameter would never be set.
    D3DXHANDLE handle = m_pEffect->GetParameterByName( NULL, strName.c_str() );
    assert( handle );
    HRESULT hr = m_pEffect->SetMatrix( handle, newVal );
    assert( SUCCEEDED( hr ) );
}
With this new setup, pan and zoom work, but when I draw my textured quads they all end up in the same position, stacked one on top of the other (I verified this with PIX, and I also checked that the world matrix is set right before each quad is drawn, with different values each time). I'm at a loss now. Did I misunderstand something about the way this should be done? The view and projection matrices are set BEFORE calling m_pEffect->Begin(), while the world transform is set between m_pEffect->Begin() and m_pEffect->End(). Could that be the reason?
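In case it helps, this is roughly what my draw sequence looks like (a sketch, not my exact code; CQuad, m_quads and worldMatrix() are placeholder names, and I'm assuming my CEffect wrapper simply forwards Begin/BeginPass/EndPass/End to the underlying ID3DXEffect):

```cpp
// Rough sketch of the render loop (CQuad, m_quads, worldMatrix() are placeholders).
void CRenderer::render() {
    // View and projection are set BEFORE Begin().
    m_pEffect->setParameter( "g_matView", &m_matView );
    m_pEffect->setParameter( "g_matProjection", &m_matProj );

    UINT numPasses = 0;
    m_pEffect->Begin( &numPasses, 0 );  // forwarded to ID3DXEffect::Begin
    for( UINT pass = 0; pass < numPasses; ++pass ) {
        m_pEffect->BeginPass( pass );
        for( size_t i = 0; i < m_quads.size(); ++i ) {
            // World matrix is set per quad, between Begin() and End().
            D3DXMATRIXA16 matWorld = m_quads[i].worldMatrix();
            m_pEffect->setParameter( "g_matWorld", &matWorld );
            // (Do I need ID3DXEffect::CommitChanges() here? The docs say that
            // parameter changes made inside a BeginPass/EndPass block don't
            // take effect until CommitChanges is called.)
            m_quads[i].draw( m_pd3dDevice );
        }
        m_pEffect->EndPass();
    }
    m_pEffect->End();
}
```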