Shaders and animated sprites

Is it possible to load a single (large) texture into a shader and only pass in the rectangular coordinates to define a single frame of animation in that image?
Or would I need to load several (small) separate textures, and pass those into the shader?
If it is possible, what would I need to know in order to achieve this?
Absolutely. You can just adjust your texture coordinates to only map a single frame of the animation to your quad. You can also use a texture array, which allows you to just specify an index to choose which frame you want to draw.
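For example, here's a minimal sketch of the texture-coordinate approach on the pixel shader side (FrameOffset and FrameScale are hypothetical constants giving the frame's top-left corner and size in normalized [0, 1] UV space):

Texture2D SpriteSheet;
SamplerState LinearSampler;

cbuffer FrameConstants
{
    float2 FrameOffset;   // top-left corner of the frame, in [0, 1] UV space
    float2 FrameScale;    // frame size relative to the full sheet
}

float4 SpritePS(in float2 texCoord : TEXCOORD) : SV_Target
{
    // Remap the quad's full 0-1 texture coordinates into the frame's sub-rectangle
    float2 uv = FrameOffset + texCoord * FrameScale;
    return SpriteSheet.Sample(LinearSampler, uv);
}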
Briefly, how would this be structured in HLSL? I've had some trouble stabbing at it.
Using the basic effect file I have (which fully textures a quad in DX10) in NVIDIA's FX Composer, I can't even get the full texture to show, so I'm not quite sure how to tell whether what I'm doing is right.
Perhaps my lack of HLSL knowledge is the culprit... but since my only aim in using shaders is for 2D animated sprites, I feel like I'm trying too hard to understand all of it.
And just about every online HLSL resource I find is about 2D lighting or something more 3D-related, not simple 2D texturing.
So if anyone could point me in the right direction, I'm sure I could walk on my own from there.

Much appreciated!
If you're using DX10, you can just use the ID3DX10Sprite class to render sprites. However if you'd like to understand how it works I can try to explain it for you.
From what I understand, the sprite class in DX9 had a serious memory leak and I can't be sure that this was fixed in DX10.
Also, it doesn't make as much use of the GPU as custom shaders do (I realize it works similarly, just not as fast).
I need to utilize shaders because I will be rendering around 1,000 or more animated sprites per frame, and could use the spare CPU power.
And I can't use instancing, since each sprite will have its own unique properties (such as direction, velocity, and speed, and which frame to render based on the direction it is facing).

I'd love to know how it works!
From what I know so far, you need to specify a technique (with at least one pass) which sets the vertex and pixel shader to their respective shader functions.
And I need to define a global texture and sampler state variable, to be referenced from the pixel shader.
The vertex shader should handle the vertices that compose the quad and transform it in world space (right?)
What I don't quite understand is how this information actually gets passed to the GPU, and what semantics are actually for.
What I mean is, "float4 pos : POSITION" seems redundant to me since I've already declared the variable type and name before "POSITION".
Finally, I don't understand how I would set the texture coordinates to define only a single frame of animation, unless this is actually done in the vertex shader?
The DX10 sprite class will actually handle batching and instancing, and also uses shaders (it's pretty much impossible not to use them in DX10 to draw anything). It can batch together multiple sprites as long as they use the same texture, which means different animation frames are ok. I'm not sure what you're doing that requires velocity in the shader, so you'd have to explain that further.

Semantics are used to match vertex shader inputs to the individual elements of a vertex inside of a vertex buffer. When you want to use a vertex buffer with a vertex shader, as part of doing that you need to create an input layout. To do this you supply an array of D3D10_INPUT_ELEMENT_DESC structures, with one for each element in your vertex buffer (with an element being a position, texture coordinate, color, normal, etc.). Part of that element description is a semantic string. The input assembler stage uses that semantic string to match vertex elements to your vertex shader input, which makes sure that you get the data that you want in the shader.
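
Sketched on the shader side, the matching looks something like this (each semantic string pairs with the SemanticName field of one D3D10_INPUT_ELEMENT_DESC in your input layout):

struct VSInput
{
    // Fed from the vertex buffer element whose desc has SemanticName "POSITION"
    float2 Position : POSITION;

    // Fed from the element whose desc has SemanticName "TEXCOORD"
    float2 TexCoord : TEXCOORD;
};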

It's possible to generate the correct texture coordinates for a frame of animation in the vertex shader. You would need some info passed in indicating where the frame is on the texture (usually you use an xy offset plus a width and height) so that you could generate the proper texture coordinates. As an example, this is a stripped-down version of the vertex shader I use to render sprites:

cbuffer VSConstants : register(b0)
{
    float2 TextureSize;
    float2 ViewportSize;
    float4x4 Transform;
    float4 Color;
    float4 SourceRect;
}

struct VSInput
{
    float2 Position : POSITION;
    float2 TexCoord : TEXCOORD;
};

struct VSOutput
{
    float4 Position : SV_Position;
    float2 TexCoord : TEXCOORD;
    float4 Color : COLOR;
};

VSOutput SpriteVS(in VSInput input)
{
    // Scale the unit quad so that it's texture-sized
    float4 positionSS = float4(input.Position * SourceRect.zw, 0.0f, 1.0f);

    // Apply transforms in screen space
    positionSS = mul(positionSS, Transform);

    // Scale by the viewport size, flip Y, then rescale to device coordinates
    // (only X and Y are remapped, so Z and W pass through untouched)
    float4 positionDS = positionSS;
    positionDS.xy /= ViewportSize;
    positionDS.xy = positionDS.xy * 2 - 1;
    positionDS.y *= -1;

    // Figure out the texture coordinates for the frame's sub-rectangle
    float2 outTexCoord = input.TexCoord;
    outTexCoord *= SourceRect.zw / TextureSize;
    outTexCoord += SourceRect.xy / TextureSize;

    VSOutput output;
    output.Position = positionDS;
    output.TexCoord = outTexCoord;
    output.Color = Color;

    return output;
}


So in my shader "SourceRect" has the offset in XY and the width/height in ZW, both in texel units. So if you had a 512x256 texture with two frames side-by-side, you would use a SourceRect of (0, 0, 256, 256) to draw the first frame and (256, 0, 256, 256) to draw the second frame. Or like I mentioned earlier you can also use a texture array if you want, in which case you would only need to pass an index to your pixel shader to know which frame to use. However this is a little more advanced.
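
If you went the texture array route instead, the pixel shader side might look something like this (just a sketch; SpriteFrames, LinearSampler, and ArrayIndex are hypothetical names, with each animation frame stored in its own array slice):

Texture2DArray SpriteFrames;
SamplerState LinearSampler;

cbuffer PSConstants
{
    float ArrayIndex;   // which frame (array slice) to sample
}

float4 SpritePS(in float2 texCoord : TEXCOORD) : SV_Target
{
    // The third component of the texture coordinate selects the array slice
    return SpriteFrames.Sample(LinearSampler, float3(texCoord, ArrayIndex));
}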

This shader only works for drawing one sprite at a time. If you wanted to batch lots of sprites using the same texture, you could use instancing and pass the SourceRect + Transform + Color through a second vertex buffer and then have your shader access them as vertex inputs rather than through a constant buffer. Alternatively, you could have an array of such data in your constant buffer and use SV_InstanceID to get the index to use.
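For the instanced version, the per-sprite data would show up as extra vertex inputs, something like this sketch (the per-instance elements come from a second vertex buffer whose input layout elements are marked as per-instance data):

struct VSInputInstanced
{
    // Per-vertex data from the quad's vertex buffer
    float2 Position : POSITION;
    float2 TexCoord : TEXCOORD;

    // Per-instance data from a second vertex buffer, advanced once per sprite
    float4x4 Transform : TRANSFORM;
    float4 Color : COLOR;
    float4 SourceRect : SOURCERECT;
};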
Thanks for the explanation! I have a much better understanding of shaders now.
Maybe once I get this working 100%, I'll pm you asking about the texture array method.
That is, if I can't figure it out on my own.
