XNA Effect tex vs tex coord change

6 comments, last by NewtonsBit 13 years, 3 months ago
This is one of those "which is faster/better/easier to implement" questions.

I was kinda trying to think it through, but I'm fairly new to 3D programming, so I don't want to make any assumptions.

So I have a bunch of textures I'm going to use in an animated quad. I am considering the following options:

1) Create a picture for each frame and then load each into a separate Texture2D object (one animation to many textures, one texture to one frame)
2) Create a single picture (as close to square as possible given the dimensions of each frame) and change the texture coordinates every time I need to draw a different frame (one animation to one texture, one texture to many frames)

I want to optimize this as much as possible without going low-level.

It is my understanding that updating my quad's vertices means I have to resend the values of all related vertices to the graphics device (even if it's just the texture coordinates that are changing). Is there some available mechanism to update only certain values of certain vertices? Or would making a vertex buffer be completely superfluous if I go with (2)?

The only reason I don't want to do (1) is that I believe there may be some overhead in changing the Effect's Texture variable every time the frame changes, and it is also my understanding that this would be a huge waste of memory (especially on certain cards that limit texture sizes to square powers of 2).
If you change textures a lot during the same frame, and your textures don't repeat across a surface, then packing many textures into one is a good optimization. This is used extensively when drawing 2D sprites and similar. If you have a 3D map where a surface or model is animated however, and you only change the animation in between frames, then it's probably easier and faster to just change which texture object you bind the next frame. Of course this can vary, depending on your texture size and how many textures there are. If you animate a single quad with a thousand very small textures, then it might be better to put them all into one texture.
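A minimal sketch of computing per-frame texture coordinates from a grid-style atlas; the frame sizes, atlas size, and the vertices array here are hypothetical, not anything from the thread:

    // Sketch: compute one frame's UV rectangle in a grid-style atlas and
    // write it into the quad's vertices. All dimensions are hypothetical.
    static void SetFrameUVs(VertexPositionTexture[] vertices, int frameIndex)
    {
        const int framesPerRow = 4;
        const float frameWidth = 64f, frameHeight = 64f;
        const float atlasWidth = 256f, atlasHeight = 256f;

        int col = frameIndex % framesPerRow;
        int row = frameIndex / framesPerRow;

        float u0 = col * frameWidth / atlasWidth;
        float v0 = row * frameHeight / atlasHeight;
        float u1 = u0 + frameWidth / atlasWidth;
        float v1 = v0 + frameHeight / atlasHeight;

        // Positions stay put; only the texture coordinates change per frame.
        vertices[0].TextureCoordinate = new Vector2(u0, v0); // top-left
        vertices[1].TextureCoordinate = new Vector2(u1, v0); // top-right
        vertices[2].TextureCoordinate = new Vector2(u0, v1); // bottom-left
        vertices[3].TextureCoordinate = new Vector2(u1, v1); // bottom-right
    }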
Is there any way to use vertex buffers with these vertices, or will the fact that the texture coordinates fluctuate mean that I will need to retransmit these coordinates so often that a buffer would be unnecessary?
I'm going to assume that you're planning on drawing a very large number of these. You'll find that your bottleneck is sending the texture to the GPU over and over again for different frames on different objects. It's generally preferable to set up your effect once with the general parameters (such as the texture) and then draw each object; the only parameters that should change per object are things like the world matrix.
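A sketch of that pattern; the effect parameter names, sharedTexture, and the objects list are hypothetical:

    // Sketch: shared parameters set once, per-object state kept minimal.
    effect.Parameters["Texture"].SetValue(sharedTexture);
    effect.Parameters["ViewProj"].SetValue(view * projection);

    foreach (var obj in objects)
    {
        // Only the world matrix changes between objects.
        effect.Parameters["World"].SetValue(obj.WorldMatrix);
        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.SetVertexBuffer(obj.VertexBuffer);
            device.Indices = obj.IndexBuffer;
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                obj.VertexCount, 0, obj.PrimitiveCount);
        }
    }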

I'm currently drawing several hundred objects comprised of a static vertex buffer, a dynamic vertex buffer and two dynamic index buffers. Each object is comprised of 24,000 vertices. I see no performance changes in modifying a set of 24 vertices in an update call. Hell, there's only a bump of about 3ms when completely rebuilding an index buffer.
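For reference, a sketch of a partial update like that, using the XNA 4.0 DynamicVertexBuffer.SetData overload that takes a byte offset; the buffer name and vertex range are hypothetical:

    // Sketch: overwrite a small range of vertices in a DynamicVertexBuffer.
    VertexPositionColor[] patch = new VertexPositionColor[24];
    // ... fill patch with the new vertex values ...
    int stride = VertexPositionColor.VertexDeclaration.VertexStride;
    int firstVertex = 100; // wherever the changed vertices start
    dynamicBuffer.SetData(firstVertex * stride, patch, 0, patch.Length,
        stride, SetDataOptions.None);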

If you're only drawing a couple of objects, then don't worry.
Quote:Original post by FenixRoA
Is there any way to use vertex buffers with these vertices, or will the fact that the texture coordinates fluctuate mean that I will need to retransmit these coordinates so often that a buffer would be unnecessary?


You could always have one vertex buffer for each frame. If you're rendering it a bunch of times and have some memory to spare, it could be a good solution. Or use separate streams for position data vs. texture coordinates.
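A minimal sketch of the buffer-per-frame idea, assuming a hypothetical GetFrameVertices helper that returns the quad's four vertices with frame i's texture coordinates:

    // Sketch: one static VertexBuffer per animation frame.
    VertexBuffer[] frameBuffers = new VertexBuffer[frameCount];
    for (int i = 0; i < frameCount; i++)
    {
        VertexPositionTexture[] verts = GetFrameVertices(i);
        frameBuffers[i] = new VertexBuffer(device, typeof(VertexPositionTexture),
            verts.Length, BufferUsage.WriteOnly);
        frameBuffers[i].SetData(verts);
    }

    // At draw time, just bind the buffer for the current frame:
    device.SetVertexBuffer(frameBuffers[currentFrame]);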
Quote:Original post by FenixRoA
Is there any way to use vertex buffers with these vertices, or will the fact that the texture coordinates fluctuate mean that I will need to retransmit these coordinates so often that a buffer would be unnecessary?


It's probably unnecessary. If you're just rendering a quad then there won't be very many bytes transferred anyway; a quad of four VertexPositionTexture vertices, for instance, is only 4 × 20 = 80 bytes per update.
@NewtonsBit:

Streams? How do they work?

I think I'm liking this separate vertex buffer for each frame idea, and I don't think it will be too expensive for my needs, but I'm going to try to research how streams are used in XNA. If you have any pointer material, please pass it along.
Quote:Original post by FenixRoA
@NewtonsBit:

Streams? How do they work?

I think I'm liking this separate vertex buffer for each frame idea, and I don't think it will be too expensive for my needs, but I'm going to try to research how streams are used in XNA. If you have any pointer material, please pass it along.


This has changed from XNA 3.1 -> 4.0, but this is how I understand it in 4.0:

In essence, you send two (or more) vertex buffers to the GPU at one time. These two buffer streams are combined for the vertex shader.


With 4.0, here's my draw code:
    // RENDER TERRAIN
    foreach (Terrain terrain in TerrainManager.Terrains)
    {
        effect.Parameters["LightWorldViewProj"].SetValue(terrain.WorldMatrix * lightViewProj);
        effect.Parameters["WorldViewProj"].SetValue(terrain.WorldMatrix * renderViewProj);
        effect.Parameters["World"].SetValue(terrain.WorldMatrix);

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.Indices = terrain.ClipIndexBuffer;
            device.SetVertexBuffers(terrain.VertexBufferPosition, terrain.VertexBufferColorNormal);
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, terrain.VertexCount, 0, terrain.IndexCount / 3);
        }
    }


In the device.SetVertexBuffers call I've used two VertexBuffers. VertexBuffers already contain the VertexDeclaration, so it is easy peasy. One buffer has the Position data (which is static); the other has the Color and Normal data (which can change).
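For completeness, a sketch of how the two buffers for such a split might be created; the VertexColorNormal struct and its layout are assumptions, not the actual types used above:

    // Sketch: a custom vertex type for the second stream (color + normal).
    // XNA just needs a matching VertexDeclaration so the streams combine.
    public struct VertexColorNormal : IVertexType
    {
        public Color Color;
        public Vector3 Normal;

        public static readonly VertexDeclaration VertexDeclaration =
            new VertexDeclaration(
                new VertexElement(0, VertexElementFormat.Color,
                    VertexElementUsage.Color, 0),
                new VertexElement(4, VertexElementFormat.Vector3,
                    VertexElementUsage.Normal, 0));

        VertexDeclaration IVertexType.VertexDeclaration
        {
            get { return VertexDeclaration; }
        }
    }

    // Positions go in one static buffer, color/normal in a dynamic one:
    var positionBuffer = new VertexBuffer(device, typeof(VertexPosition),
        vertexCount, BufferUsage.WriteOnly);
    var colorNormalBuffer = new DynamicVertexBuffer(device, typeof(VertexColorNormal),
        vertexCount, BufferUsage.WriteOnly);

With both bound via SetVertexBuffers, only the color/normal buffer needs a SetData call when its data changes; the position buffer is uploaded once and left alone.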

This topic is closed to new replies.
