Creating a custom sprite class

Started by
8 comments, last by george7378 8 years, 5 months ago

Hi!

At the moment, I'm using a single window-sized ID3DXSprite to display a 2D texture on screen. This sprite is drawn with a pixel shader to allow for post-processing effects to be added to the texture. The issue is that my shader contains more arithmetic instructions than ps_2_0 allows, so I want to use ps_3_0 instead. However, it doesn't seem like ps_3_0 and ID3DXSprite can be used together (am I right about that?). As a result, I'm probably going to have to create my own sprite class, so that I can also use a custom vertex shader (compiled with vs_3_0).

I was wondering if anyone could point me to a good online tutorial, or a reference in a book, which will show me an example of a custom 2D sprite class? It doesn't have to be complicated - basically, when the program starts I'd like to create a single instance of the sprite with the same width and height as the program's window, then on each frame I'd like to render it using my custom shaders.

Thanks for the help!


If all you're doing with sprites is this single window-sized texture, then you can look into screen-aligned quads and do this entirely in the shader, without the need for a custom sprite class.

You can simply have a class containing an array of a sprite struct:


struct TSprite
{
  CVector2 Position;
  CVector2 Size;
  CColor Color;
  float Angle;
};

Have a BeginSprite function which clears the array, a RenderSprite function which adds a sprite to the array, and an EndSprite function which renders the array of sprites.

You only need one draw call, because you generate the geometry each frame into a dynamic vertex buffer. This is called sprite batching.
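The buffer-filling step of that batching scheme can be sketched in plain C++. This is only an illustration of the vertex expansion, not the D3D9 lock/draw code; `TVec2`, `TVertex` and `ExpandSprite` are made-up names, and colour/UVs are omitted for brevity. Each sprite expands into the six vertices of two triangles, rotated about the sprite's centre:

```cpp
#include <cmath>
#include <vector>

struct TVec2 { float x, y; };

struct TSprite
{
    TVec2 Position; // top-left corner in screen units
    TVec2 Size;
    float Angle;    // rotation about the sprite's centre, in radians
};

// One vertex of the dynamic buffer (colour/UVs left out of the sketch).
struct TVertex { float x, y; };

// Expand one sprite into the 6 vertices of its two triangles,
// rotating each corner about the sprite's centre.
static void ExpandSprite(const TSprite& s, std::vector<TVertex>& out)
{
    const float cx = s.Position.x + s.Size.x * 0.5f;
    const float cy = s.Position.y + s.Size.y * 0.5f;
    const float c  = std::cos(s.Angle);
    const float sn = std::sin(s.Angle);

    // Corner order: top-left, top-right, bottom-left, bottom-right.
    const float xs[4] = { s.Position.x, s.Position.x + s.Size.x,
                          s.Position.x, s.Position.x + s.Size.x };
    const float ys[4] = { s.Position.y, s.Position.y,
                          s.Position.y + s.Size.y, s.Position.y + s.Size.y };

    TVertex corners[4];
    for (int i = 0; i < 4; ++i)
    {
        const float dx = xs[i] - cx, dy = ys[i] - cy;
        corners[i] = { cx + dx * c - dy * sn, cy + dx * sn + dy * c };
    }

    // Two triangles: TL-TR-BL and BL-TR-BR.
    const int order[6] = { 0, 1, 2, 2, 1, 3 };
    for (int i = 0; i < 6; ++i) out.push_back(corners[order[i]]);
}
```

EndSprite would then lock the dynamic vertex buffer, memcpy this array in, and issue a single DrawPrimitive for all sprites.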

You want a Full Screen Quad. It is simply a vertex buffer with 6 points (or 4 with an index buffer) in screen space (-1 to 1). Set your shader variables, including the texture you want to manipulate, then draw the FSQ.

You can go even simpler: use a single triangle and generate the vertices in the shader, aka a Full Screen Triangle.
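One caveat for D3D9: there is no system-generated vertex ID in vs_3_0, so you would still feed the shader a tiny three-vertex buffer carrying an index, and derive position and UV from it. The derivation itself (the oversized-triangle trick, where one triangle covers the whole [-1, 1] square) can be sketched in plain C++; `Vtx` and `FullScreenTriangleVertex` are illustrative names:

```cpp
struct Vtx { float x, y, u, v; };

// Map a vertex index 0..2 to a clip-space position and UV such that the
// single triangle covers the whole [-1,1] screen square. The parts of
// the triangle outside the square are clipped away by the rasterizer.
static Vtx FullScreenTriangleVertex(int id)
{
    Vtx v;
    v.x = (id == 1) ? 3.0f : -1.0f;
    v.y = (id == 2) ? 3.0f : -1.0f;
    v.u = (v.x + 1.0f) * 0.5f; // maps clip x to [0,1] left-to-right
    v.v = (1.0f - v.y) * 0.5f; // flips y so v = 0 is the top of the screen
    return v;
}
```

The same two lines of arithmetic translate directly into a vertex shader body once the index is available as an input.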

Hey again,
Thanks for the responses! I've created my quad class, along with a vertex shader to go with it. Here's my class:

struct ScreenQuadVertex
{
    D3DXVECTOR4 pos;
    D3DXVECTOR2 texCoords;
};

class ScreenQuad
{
private:
    LPDIRECT3DVERTEXBUFFER9 vertexBuffer;
    IDirect3DVertexDeclaration9* vertexDeclaration;

public:
    ScreenQuad() {}

    bool CreateResources()
    {
        ScreenQuadVertex vertices[] =
        {
            {D3DXVECTOR4(-1,  1, 0, 1), D3DXVECTOR2(0, 1)}, //Bottom Left
            {D3DXVECTOR4( 1,  1, 0, 1), D3DXVECTOR2(1, 1)}, //Bottom Right
            {D3DXVECTOR4(-1, -1, 0, 1), D3DXVECTOR2(0, 0)}, //Top Left
            {D3DXVECTOR4( 1, -1, 0, 1), D3DXVECTOR2(1, 0)}, //Top Right
        };

        D3DVERTEXELEMENT9 elements[] =
        {
            {0, sizeof(float)*0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
            {0, sizeof(float)*4, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
            D3DDECL_END()
        };

        vertexDeclaration = 0;
        if (FAILED(d3ddev->CreateVertexDeclaration(elements, &vertexDeclaration))) { return false; }
        if (FAILED(d3ddev->CreateVertexBuffer(4*sizeof(ScreenQuadVertex), 0, 0, D3DPOOL_MANAGED, &vertexBuffer, 0))) { return false; }

        void* pVoid;
        if (FAILED(vertexBuffer->Lock(0, 0, &pVoid, 0))) { return false; }
        memcpy(pVoid, vertices, sizeof(vertices));
        if (FAILED(vertexBuffer->Unlock())) { return false; }

        return true;
    }

    bool Render()
    {
        if (FAILED(d3ddev->SetVertexDeclaration(vertexDeclaration))) { return false; }
        if (FAILED(d3ddev->SetStreamSource(0, vertexBuffer, 0, sizeof(ScreenQuadVertex)))) { return false; }

        if (FAILED(d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2))) { return false; }

        return true;
    }

    void DeleteResources()
    {
        SAFE_RELEASE(&vertexBuffer);
        SAFE_RELEASE(&vertexDeclaration);
    }
};
...and here's a simple effect file which is supposed to take a texture (which is the same dimensions as the program client area) and display it on screen:

texture ScreenTexture;

sampler ScreenTextureSampler = sampler_state
{
    texture   = <ScreenTexture>;
    MagFilter = POINT;
    MinFilter = POINT;
    MipFilter = POINT;
    AddressU  = Mirror;
    AddressV  = Mirror;
};

struct PixelColourOut
{
    float4 Colour : COLOR0;
};

struct ScreenQuadVertexToPixel
{
    float4 Position  : POSITION;
    float2 TexCoords : TEXCOORD0;
};

ScreenQuadVertexToPixel ScreenQuadVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0)
{
    ScreenQuadVertexToPixel Output = (ScreenQuadVertexToPixel)0;

    Output.Position = inPos;
    Output.TexCoords = inTexCoords;

    return Output;
}

PixelColourOut ScreenPixelShader(ScreenQuadVertexToPixel PSIn)
{
    PixelColourOut Output = (PixelColourOut)0;

    Output.Colour = tex2D(ScreenTextureSampler, PSIn.TexCoords);

    return Output;
}

technique ShowTexture
{
    pass Pass0
    {
        VertexShader = compile vs_3_0 ScreenQuadVertexShader();
        PixelShader  = compile ps_3_0 ScreenPixelShader();
    }
}
It seems to work overall, i.e. when I send in this texture:
[attachment=29511:testtex.jpg]
I get this result when I run the program:
[attachment=29512:testtexrender.png]
There are two obvious issues - first, the image is flipped vertically, implying that there is something wrong with the texture coordinates I defined, and second, the texture is very blocky when displayed by my program. This shouldn't happen, because even though the shader sampler uses no filtering, the client area and texture are the same dimensions, meaning there should be a 1:1 correspondence between texels and pixels. Note that I used the AdjustWindowRect() function to make sure the client area of my window matches the texture dimensions (640x480 in this case).
Can anyone see why these things are happening from my code? I thought that HLSL used (0, 0) for the top left of the texture and (1, 1) for the bottom right, as I have coded into the four vertices of the ScreenQuad. Is this not the case?
Thanks for the help!

In normalized device coordinates, bottom left is (-1, -1). Also, the 1:1 mapping isn't that simple in D3D9 - see "Directly Mapping Texels to Pixels". You need to shift half a texel/pixel up and left.

Ah OK, so I need to shift the coordinates of each vertex (which I have now corrected to properly fit the NDC format) by -0.5/WIDTH in the x direction and -0.5/HEIGHT in the y direction?

Almost. The NDC range is 2, so it's -1/WIDTH and +1/HEIGHT (the latter shifting up). Personally, I prefer giving the vertex shader a transformation, so one can achieve this with a translation and an ortho projection and leave the vertex buffer as is (useful for other purposes, say, non-fullscreen quads).
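The arithmetic behind that correction is small enough to write out. A sketch in plain C++ (`Offset` and `HalfTexelOffsetNDC` are illustrative names, not part of any API):

```cpp
struct Offset { float dx, dy; };

// Half-texel shift expressed in NDC. NDC spans 2 units across the
// screen, so half a pixel is (0.5 / width) * 2 = 1 / width. The shift
// is left (-x) and up (+y), per "Directly Mapping Texels to Pixels".
static Offset HalfTexelOffsetNDC(int width, int height)
{
    return { -1.0f / width, 1.0f / height };
}
```

Adding this offset to each quad vertex (or folding it into the vertex shader's transform, as suggested above) aligns texel centres with pixel centres in D3D9.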

You can check if it works by enabling linear filtering and compare input and output.

Thank you very much for the help, this has allowed me to create a working quad class :)

This topic is closed to new replies.
