How to replace SetFVF with vertex shader?


I have a program with a pixel shader which draws a 2D image in a rectangle using an orthographic projection. This works well, but one of the display cards doesn't show the image, giving the error message "Vertex shader v. 3.0 is required". Though this looks like a driver bug, I must support this card, so I need to replace my D3D rectangle-drawing code with a vertex shader. The code is quite simple: it draws a textured 2D rectangle in the center of the screen, in the Z=0 plane, with size IMAGE_SIZE:
// Vertices
LPDIRECT3DVERTEXBUFFER9 g_pVertices = NULL;

struct PANELVERTEX
{
    FLOAT x, y, z;
    DWORD color;
    FLOAT u, v;
};

#define D3DFVF_PANELVERTEX (D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX1)

// Initialization
void PostInitialize(HWND hWnd)
{
    ...

    g_pd3dDevice->SetVertexShader(NULL);
    g_pd3dDevice->SetFVF(D3DFVF_PANELVERTEX);

    g_pd3dDevice->SetStreamSource(0, g_pVertices, 0, sizeof(PANELVERTEX));
}

// Initialize vertices
void CreateVertices()
{
    float PanelWidth = IMAGE_SIZE;     // 1024
    float PanelHeight = IMAGE_SIZE;


    g_pd3dDevice->CreateVertexBuffer(4 * sizeof(PANELVERTEX), D3DUSAGE_WRITEONLY,
        D3DFVF_PANELVERTEX, D3DPOOL_MANAGED, &g_pVertices, NULL);

    PANELVERTEX* pVertices = NULL;
    g_pVertices->Lock(0, 4 * sizeof(PANELVERTEX), (void**)&pVertices, 0);

    //Set all the colors to white
    pVertices[0].color = pVertices[1].color = pVertices[2].color = pVertices[3].color = 0xffffffff;

    //Set positions and texture coordinates
    pVertices[0].x = pVertices[3].x = -PanelWidth / 2.0f;
    pVertices[1].x = pVertices[2].x = PanelWidth / 2.0f;

    pVertices[0].y = pVertices[1].y = PanelHeight / 2.0f;
    pVertices[2].y = pVertices[3].y = -PanelHeight / 2.0f;

    pVertices[0].z = pVertices[1].z = pVertices[2].z = pVertices[3].z = 1.0f;

    pVertices[1].u = pVertices[2].u = 1.0f;
    pVertices[0].u = pVertices[3].u = 0.0f;

    pVertices[0].v = pVertices[1].v = 0.0f;
    pVertices[2].v = pVertices[3].v = 1.0f;

    g_pVertices->Unlock();
}

// WM_SIZE handling
void SetViewParameters(HWND hWnd)
{
    RECT rect;
    GetClientRect(hWnd, &rect);

    float WindowWidth = (float)(rect.right - rect.left);
    float WindowHeight = (float)(rect.bottom - rect.top);

    float viewWidth = IMAGE_SIZE;
    float viewHeight = IMAGE_SIZE;

    if ( WindowWidth > WindowHeight )
    {
        viewWidth *= (WindowWidth/WindowHeight);
    }
    else
    {
        viewHeight *= (WindowHeight/WindowWidth);
    }


    D3DXMATRIX Ortho2D;	
    D3DXMATRIX Identity;

    D3DXMatrixOrthoLH(&Ortho2D, viewWidth, viewHeight, 0.0f, 1.0f);
    D3DXMatrixIdentity(&Identity);

    g_pd3dDevice->SetTransform(D3DTS_PROJECTION, &Ortho2D);
    g_pd3dDevice->SetTransform(D3DTS_WORLD, &Identity);
    g_pd3dDevice->SetTransform(D3DTS_VIEW, &Identity);
}

// Rendering
void Render2D()
{
   ...

    g_pd3dDevice->DrawPrimitive(D3DPT_TRIANGLEFAN, 0, 2);
}




Now I need to do the same with a vertex shader. I know how to compile a vertex shader and set it on the 3D device. But I don't know what the vertex shader code should be, and which part of the code shown here should be removed and replaced by the vertex shader.

As far as I can see, this is your first time working with vertex shaders... anyway:

1) You have to create adequate vertex shader code.
1.1) Define your vertex shader input. From the FVF, I can see you need XYZ coordinates, a diffuse color and texture coordinates, so you would write this in the vertex shader code:


struct VS_INPUT
{
    float4 Pos : POSITION0;
    float4 Dif : COLOR0;
    float2 Tex : TEXCOORD0;
};



Now you have to replace the FVF declaration with a vertex declaration. It's too much to explain fully here, but you can look up D3DVERTEXELEMENT9 and IDirect3DDevice9::CreateVertexDeclaration in the documentation and try to understand them; a sketch follows below.
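A minimal sketch of a vertex declaration matching the PANELVERTEX layout from the first post (the offsets assume that exact FLOAT x,y,z / DWORD color / FLOAT u,v packing; g_pVertexDecl is a hypothetical name):

// One D3DVERTEXELEMENT9 per field: position (offset 0), color (12), UV (16)
D3DVERTEXELEMENT9 decl[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    { 0, 16, D3DDECLTYPE_FLOAT2,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

LPDIRECT3DVERTEXDECLARATION9 g_pVertexDecl = NULL;   // hypothetical global
g_pd3dDevice->CreateVertexDeclaration(decl, &g_pVertexDecl);

// At render time, this replaces the SetFVF call:
g_pd3dDevice->SetVertexDeclaration(g_pVertexDecl);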

Let me take a guess at what you're doing: you first ask D3D what pixel shader version the hardware supports, then compile for that version. This is bad for several reasons, despite the number of people who suggest it.

1. (This is what makes me think you're doing the above.) ps_3_0 is tied to vs_3_0; you can't use a ps_3_0 pixel shader without a vs_3_0 vertex shader.

2. Smaller shaders may actually run faster as ps_1_1 than as ps_2_0, so it's recommended to use the lowest shader version that does what you need. ps_1_1 supported lots of instruction and argument modifiers that take extra instructions in 2.0. This is extra important because GeForce FX (5000 series) cards are, to be blunt, garbage: they run ps_2_0 slower than you probably think is possible.

3. SM3 changed a lot. No fog register. No color0 or color1 interpolators (replaced by two extra higher-precision texture coordinate interpolators). No support for fixed-function, vs_1_x, or vs_2_0 vertex input; only vs_3_0.

I second NTNET's post.
It's extremely important not just to write the shaders and make them work, but also to know what you are doing. If a shader is implemented incorrectly it can cause severe bottlenecks and, with that, low framerates.

Thank you NTNET!

The original problem is the following. I have a program which has a pixel shader and doesn't have any vertex shader. It is a 2D rendering program, with the pixel shader used for image processing. I think this is OK and supported by Direct3D; indeed, it works on NVIDIA GeForce and other cards. However, with an ATI Radeon HD 3600 I get the following error message from the DirectX debug runtime:

Direct3D9: (ERROR) :ps_3_0 shader can only be used with vs_3_0+ shader (SWVP or HWVP), or transformed vertices.

So a missing vertex shader is interpreted as a vertex shader of a lower version. I cannot fight the card manufacturer; I just need to support the card. This is why I want to write a vertex shader which gives the same result as my existing code with SetFVF. Or any other solution, if somebody can suggest one.
Thank you.

If you don't want to use a vertex shader with a ps_3_0 pixel shader, you'll have to use pre-transformed coordinates. This is done by using the D3DFVF_XYZRHW FVF code, or a D3DDECLUSAGE_POSITIONT usage in your vertex declaration. When you specify this, your vertex coordinates should be screen coordinates (and also offset by -0.5 if you want texels to line up exactly with render target pixels). A sketch of such a vertex layout follows.
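A minimal sketch of a pre-transformed vertex format, assuming the same color and texture coordinate layout as the PANELVERTEX struct from the first post (the TLVERTEX name is just for illustration):

// Pre-transformed vertex: the position is already in screen space,
// so the fixed-function transform stage is skipped entirely
struct TLVERTEX
{
    FLOAT x, y, z, rhw;   // screen-space position; rhw is typically 1.0f
    DWORD color;
    FLOAT u, v;
};

#define D3DFVF_TLVERTEX (D3DFVF_XYZRHW | D3DFVF_DIFFUSE | D3DFVF_TEX1)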

Right, I use a pixel shader for the texture and an FVF code for the vertices, and this doesn't work on the ATI Radeon display card. This is why I want to write a vertex shader.

Why do you want to use a vertex shader if you're doing image processing? I'd assume for this you just want to render a full-screen quad that's exactly overlaid on the render target, in which case you'd want to use pre-transformed coordinates (since you know ahead of time where the vertices should be positioned). You just set the screen-space coordinates manually; no need for projection matrices or anything like that. See the sketch below.
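A minimal sketch of filling such a quad, assuming the hypothetical TLVERTEX layout sketched above and a known render target size W x H:

// Fill one pre-transformed vertex; the -0.5f offset aligns texel
// centers with pixel centers under the D3D9 rasterization rules
static void SetVert(TLVERTEX& vtx, float x, float y, float u, float v)
{
    vtx.x = x - 0.5f;
    vtx.y = y - 0.5f;
    vtx.z = 0.0f;
    vtx.rhw = 1.0f;
    vtx.color = 0xffffffff;
    vtx.u = u;
    vtx.v = v;
}

// Quad vertices in the same triangle-fan order as the original code
void FillScreenQuad(TLVERTEX* quad, float W, float H)
{
    SetVert(quad[0], 0.0f, 0.0f, 0.0f, 0.0f);   // top-left
    SetVert(quad[1], W,    0.0f, 1.0f, 0.0f);   // top-right
    SetVert(quad[2], W,    H,    1.0f, 1.0f);   // bottom-right
    SetVert(quad[3], 0.0f, H,    0.0f, 1.0f);   // bottom-left
}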

If you really want to use a vertex shader you can just use something like this:



float4x4 matWorldViewProj : WORLDVIEWPROJECTION;

struct VS_INPUT
{
    float4 vPositionOS : POSITION;
    float2 vTexcoord : TEXCOORD0;
};

struct VS_OUTPUT
{
    float4 vPositionCS : POSITION;
    float2 vTexcoord : TEXCOORD0;
};

// "VertexShader" is a reserved word in HLSL, so the entry point is named VSMain
void VSMain( in VS_INPUT IN, out VS_OUTPUT OUT )
{
    OUT.vPositionCS = mul( IN.vPositionOS, matWorldViewProj );
    OUT.vTexcoord = IN.vTexcoord;
}
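One thing to note: once a vertex shader is bound, the fixed-function SetTransform calls no longer apply, so matWorldViewProj must be uploaded as a shader constant. A minimal sketch, assuming the shader above lives in a hypothetical file quad.vsh (the constant-table route is one option; SetVertexShaderConstantF with a known register is another):

// Compile the vertex shader and grab its constant table
LPD3DXBUFFER pCode = NULL;
LPD3DXCONSTANTTABLE pConstants = NULL;
LPDIRECT3DVERTEXSHADER9 pVS = NULL;

D3DXCompileShaderFromFile("quad.vsh", NULL, NULL, "VSMain", "vs_3_0",
                          0, &pCode, NULL, &pConstants);
g_pd3dDevice->CreateVertexShader((DWORD*)pCode->GetBufferPointer(), &pVS);
pCode->Release();

// When the view changes: world and view are identity, so the combined
// matrix is just the ortho projection built in SetViewParameters
D3DXMATRIX matWVP;
D3DXMatrixOrthoLH(&matWVP, viewWidth, viewHeight, 0.0f, 1.0f);
pConstants->SetMatrix(g_pd3dDevice, "matWorldViewProj", &matWVP);

// Before drawing:
g_pd3dDevice->SetVertexShader(pVS);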




I agree with MJP; using pre-transformed vertices makes sense for 2D processing. Alternatively, try ps_2_a as your shader model, which gives you a lot more room to work with than ps_2_0 (although less than ps_3_0 or what modern hardware allows for PS 2.x).
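For what it's worth, the shader model is just the profile string you pass at compile time. A sketch, assuming a hypothetical shader.fx whose pixel shader entry point is named ps:

// Compile the same HLSL source against the ps_2_a profile instead of ps_2_0
LPD3DXBUFFER pCode = NULL;
D3DXCompileShaderFromFile("shader.fx", NULL, NULL, "ps", "ps_2_a",
                          0, &pCode, NULL, NULL);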

ET3D, your idea about reducing the shader version works with the ATI Radeon display card; I found that ps_2_0 is OK. Now I need to translate my existing HLSL shader to ps_2_0 assembly. The shader code is:


Texture ColorTexture;

sampler ColorTextureSampler = sampler_state
{
    texture = <ColorTexture>;
    // POINT gives the intended unfiltered lookup; NONE is only valid for the mip filter
    magfilter = POINT;
    minfilter = POINT;
    mipfilter = NONE;
    AddressU = wrap;
    AddressV = wrap;
};

Texture PaletteTexture;

sampler PaletteTextureSampler = sampler_state
{
    texture = <PaletteTexture>;
    magfilter = POINT;
    minfilter = POINT;
    mipfilter = NONE;
    AddressU = wrap;
    AddressV = wrap;
};

struct v2p
{
    float4 Position : POSITION0;
    float2 TexCoords : TEXCOORD0;
};

struct p2f
{
    float4 Color : COLOR0;
};

void ps( in v2p IN, out p2f OUT )
{
    float4 TextureColor = tex2D( ColorTextureSampler, IN.TexCoords );
    float f = TextureColor.r;

    // Clamp the top value so the wrapped palette lookup never lands back on texel 0
    if ( f == 1.00 )
    {
        f = 0.999;
    }

    OUT.Color = tex1D( PaletteTextureSampler, f );
}




This shader uses two textures: the first is the image shown on the screen, the second is an 8 bpp lookup table. I need help translating this shader to ps_2_0 to meet the graphics card's requirements.
Thank you.


What problem are you having? It looks like a simple shader that should work as is in ps_2_0.

Thank you very much for your help. The problem is solved by reducing the pixel shader version to ps_2_0, as suggested by ET3D. The existing shader code was translated to assembly using the fxc tool.
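For anyone finding this later, a command along these lines produces the assembly listing with the DirectX SDK's fxc compiler (the file names are just examples; /T selects the profile, /E the entry point, /Fc the assembly output file):

fxc /T ps_2_0 /E ps /Fc shader.asm shader.fx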
