Optimizing Instancing Shader (Reducing Transfer Volume)


Hello all, long time since my last post.

I worked on a voxel engine about a year ago in XNA, using hardware instancing of each cube's faces. While making it, I posted here asking for advice (link below) on how to improve the framerate, as my code was incredibly inefficient, from doing too much unneeded update logic to simply drawing too much. Over the course of that discussion it improved vastly, but I had to put the project down without implementing the single most important piece of advice I took from that thread: reducing the volume of data sent about each instance. The following quote explains how much I was using and how much I could have been using.

It sounds like your instance vertex format is something like this?


transform 16*4 -> 64 bytes
texcoord 2*4 -> 8 bytes
texcoord 2*4 -> 8 bytes
texcoord 2*4 -> 8 bytes
color 4*4 -> 16 bytes

As kalle_h mentioned, you should be able to reduce the color to 4 bytes. You can also use lower precision values for the texcoords. Using HalfVector2 for the texcoords will cut their size in half (or use the index method kalle_h mentioned). These are simple changes to the vertex format; you don't need to change the shader.

For the transform, it sounds like you're passing a whole matrix? You actually only need to pass some of the matrix elements, and you can "reconstruct" the matrix in the shader. Certainly you could cut this down to 12 floats. If you only need translation, then you could cut it down to 3 floats. If you also need a uniform scale, that's only 1 more float. Rotation? Probably 4 more.

So, conservatively, you could have:
transform 12*4 -> 48 bytes
texcoord 2*2 -> 4 bytes
texcoord 2*2 -> 4 bytes
texcoord 2*2 -> 4 bytes
color 4*1 -> 4 bytes
TOTAL: 64 bytes

More aggressively, say you only need translation for your transform:
transform 3*4 -> 12 bytes
texcoord 2*2 -> 4 bytes
texcoord 2*2 -> 4 bytes
texcoord 2*2 -> 4 bytes
color 4*1 -> 4 bytes
TOTAL: 28 bytes

My goal is to take the last byte count down even further.

transform 3*4 -> 12 bytes
texcoord 1*2 -> 2 bytes
texcoord 1*2 -> 2 bytes
texcoord 1*2 -> 2 bytes
color 2*1 -> 2 bytes
TOTAL: 20 bytes

As the quote shows, I was indeed passing an entire matrix, when only the translation info was needed. I solved this easily enough by simply passing a Vector3 and rebuilding the world matrix in the shader.
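
For reference, the reconstruction ends up being only a few lines of HLSL. This is just a sketch with placeholder names rather than my exact code:

float4x4 BuildTranslation(float3 t)
{
    // XNA uses a row-major, row-vector convention, so the translation
    // goes in the bottom row of the matrix.
    return float4x4(1,   0,   0,   0,
                    0,   1,   0,   0,
                    0,   0,   1,   0,
                    t.x, t.y, t.z, 1);
}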

Next, I want to take the next three variables (coordinates into a texture atlas: base, overlay, and breaking) and reduce each of them further, from 2*2 bytes down to a single 16-bit unsigned integer, which should allow ~65,536 different atlas coordinates. The problems are:

(1). I don't know how to turn an index into x and y integers in HLSL code. I could do it fine in C# by dividing the index by the number of pieces per row, storing the rounded-down result as y, and subtracting y * the number of pieces per row from the index to get x.
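
In HLSL I imagine that logic would look something like this sketch (tilesPerRow being whatever my atlas uses; the index arrives as a float because DX9 vertex inputs are floats):

float2 AtlasCoordFromIndex(float index, float tilesPerRow)
{
    // Same math as the C# version: y = floor(index / perRow), x = remainder.
    float y = floor(index / tilesPerRow);
    float x = index - y * tilesPerRow;
    return float2(x, y);
}

That part I can guess at; it's getting the index to the shader that I'm stuck on.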

(2). HLSL has no 16-bit integer type. Is it possible to still send a 16-bit int but convert it to a 32-bit int once it's there? Or am I just missing unlisted integral types?

Lastly, for my color, I only want to send 2 bytes. The first will represent red, green, and blue, while the second will represent transparency. How do I convert 2 bytes into 2 floats when the time comes?
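
To show what I mean, if (for example) the first byte packed RGB as 3-3-2 bits and the second byte held alpha, and both reached the shader as raw 0-255 floats, I imagine the decode would be something like this (just a guess at one possible packing, not a settled format):

float4 DecodeColor(float packedRGB, float alpha)
{
    // No bitwise ops in DX9-era HLSL, so peel the bit fields off with
    // floor/fmod: packedRGB = r*32 + g*4 + b, with r,g in 0..7 and b in 0..3.
    float r = floor(packedRGB / 32.0);             // top 3 bits
    float g = floor(fmod(packedRGB, 32.0) / 4.0);  // middle 3 bits
    float b = fmod(packedRGB, 4.0);                // bottom 2 bits
    return float4(r / 7.0, g / 7.0, b / 3.0, alpha / 255.0);
}

But I don't know which vertex element format actually gets the two bytes there in that form.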

All in all, I just need help converting the different data types back and forth so I can send my instance information with the least bandwidth possible.

For those of you that would like to view the original discussion, click this link: http://www.gamedev.net/topic/629247-face-instancing-dividing-draw-calls/

EDIT: I just realized that since I am instancing individual faces of each cube, I do also need rotation, so memory per instance will actually look like this:

transform 12*4 -> 48 bytes
texcoord 1*2 -> 2 bytes
texcoord 1*2 -> 2 bytes
texcoord 1*2 -> 2 bytes
color 2*1 -> 2 bytes
TOTAL: 56 bytes
What I don't understand is why I need to send an additional 6 bytes for a 3-axis rotation, or where in the matrix that data lives, so I now also need help getting this cut down a bit.

Since you're working on a Minecraft clone, instead of optimizing that representation, I'll suggest an alternative approach to the problem.

- You don't want to render individual cubes. You can't see most of them anyway as many of them will be in the middle of objects.

- Your best option is probably to split the rendering into 6 passes. Each one draws one face for lots of cubes. Within those 6 passes you can have a separate draw call per texture if you need to.

Your algorithm should be approximately:

1. Determine a list of visible faces (for each cube look for empty space on each side). Note that this won't change very often from frame to frame, and changes will tend to be localized, so you can incrementally update it when something changes.

2. You can combine multiple faces into a single quad by using texture wrapping (this requires some shader maths if you're using an atlas; see the sketch after this list). For example, an 8x8 square of 64 faces with the same texture can be drawn as two triangles. You can also merge faces in just one dimension to simplify the merging code.

3. Most of the time one or more of the cube faces will be invisible for all cubes, so don't render those faces.
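
The shader maths mentioned in step 2 is roughly this (a sketch, assuming the tile-local coordinate spans 0..N across the merged quad; beware that frac-based wrapping can bleed between atlas tiles at lower mip levels):

float4 SampleAtlasTiled(sampler2D atlas, float2 tileUV,
                        float2 atlasOrigin, float2 atlasTileSize)
{
    // Repeat the tile across the merged quad, then remap into the atlas cell.
    float2 wrapped = frac(tileUV);
    return tex2D(atlas, atlasOrigin + wrapped * atlasTileSize);
}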

That should get your poly and draw call counts down to something manageable, without complicated vertex format optimizations.

You may find you actually want to deliberately split your vertex buffers up, adding more draw calls, so that changes don't affect too big a portion of the data.

Firstly, if this ever becomes more than a learning exercise, it will definitely not be a Minecraft clone. It will instead likely be a 3D Pocket Tanks.

I am already instancing individual faces, and my game detects whether a face is hidden by its neighbouring cubes, so it only draws faces that are exposed; faces hidden behind other geometry farther away are another matter. I haven't quite figured out occlusion culling yet, but that is a secondary concern.

The entire purpose of my post was to ask how to effectively reduce graphics transfer bandwidth on a per-instance basis first, and then reduce the number of instances.

EDIT: I figured out how to reconstruct translation and rotation matrices, so all I need to pass per-instance is 3 floats for position, and 3 floats for rotation. Now, it's just a matter of whether or not a ushort will be properly converted to a regular int by the shader.
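
For completeness, the reconstruction looks roughly like this (a sketch rather than my exact code; the rotation combine order just has to match whatever the C# side expects):

float4x4 BuildWorld(float3 t, float3 r)
{
    // 3 floats of translation plus 3 floats of Euler rotation per instance,
    // rebuilt using XNA's row-major, row-vector convention.
    float sx, cx, sy, cy, sz, cz;
    sincos(r.x, sx, cx);
    sincos(r.y, sy, cy);
    sincos(r.z, sz, cz);

    float4x4 rotX  = float4x4(1, 0, 0, 0,     0, cx, sx, 0,    0, -sx, cx, 0,   0, 0, 0, 1);
    float4x4 rotY  = float4x4(cy, 0, -sy, 0,  0, 1, 0, 0,      sy, 0, cy, 0,    0, 0, 0, 1);
    float4x4 rotZ  = float4x4(cz, sz, 0, 0,   -sz, cz, 0, 0,   0, 0, 1, 0,      0, 0, 0, 1);
    float4x4 trans = float4x4(1, 0, 0, 0,     0, 1, 0, 0,      0, 0, 1, 0,      t.x, t.y, t.z, 1);

    // Rotate first, then translate (row vectors: v * R * T).
    return mul(mul(mul(rotZ, rotX), rotY), trans);
}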

Build bigger static chunks and forget about instancing. That should reduce CPU-to-GPU bandwidth to close to zero.

Let it be known that all I'm asking is how to write my shader and other relevant graphics code to send as little data per instance as possible. I'll worry about reducing the number of instances on my own.

That being said, I need to know how to send four (4) unsigned short integers to the GPU, considering HLSL has no 16-bit integer type.

Typically if you want a compact orientation, you'll use a quaternion represented as 4 normalized, 16-bit values. It's been a very long time since I did any XNA, but it should support doing this. Looking at the documentation, you'll want to use a vertex element with its format set to VertexElementFormat.NormalizedShort4. When you use this format, your vertex buffer will need to contain 4 values of type short (AKA System.Int16), where 0 represents a floating-point value of 0.0, 32767 represents 1.0, and -32767 represents -1.0. When the shader reads these values, it will automatically convert from integers to floating point. This makes it very convenient for data that's in the range [-1, 1], which is the case for a normalized quaternion. To fill your vertex buffer you can generate the short values yourself with something like "short x = (short)(orientation.x * 32767.0f)", or you can use PackedVector.NormalizedShort4 to do the conversion for you.

Note that if you use one of the non-normalized formats (Short2, Short4, Byte4) then the shader would instead get the raw integer value cast to a floating point value, since DX9 doesn't support integer instructions in shaders.
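
On the shader side, turning the decoded quaternion into a rotation matrix is the standard conversion. Something like this (a sketch, laid out to match XNA's row-vector convention):

float3x3 QuatToRotation(float4 q)
{
    // q arrives as a normalized float4 in [-1, 1] thanks to NormalizedShort4.
    return float3x3(
        1 - 2*(q.y*q.y + q.z*q.z), 2*(q.x*q.y + q.z*q.w),     2*(q.x*q.z - q.y*q.w),
        2*(q.x*q.y - q.z*q.w),     1 - 2*(q.x*q.x + q.z*q.z), 2*(q.y*q.z + q.x*q.w),
        2*(q.x*q.z + q.y*q.w),     2*(q.y*q.z - q.x*q.w),     1 - 2*(q.x*q.x + q.y*q.y));
}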

Thanks for the tip, but I decided to simply build and pass 6 rotation matrices to the shader before even sending the instances, since a cube face can only have 6 rotations. So now it's just a matter of correctly shrinking my atlas index data. Should I be able to apply a similar technique, since my atlas indices are unsigned shorts? Even if I have to pack 2 shorts into one 32-bit integer and unpack them into two 32-bit unsigned ints in the shader, that's fine. I just don't know what combination of options will let me correctly send 4 unsigned shorts.
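
In case it's useful to anyone following along, that approach reduces the shader side to something like this sketch (names are placeholders):

// Set once from C#: one rotation matrix per cube face.
float4x4 FaceRotations[6];

float4x4 GetFaceRotation(float faceIndex)
{
    // Each instance only carries a face index (0..5) to select its rotation.
    return FaceRotations[(int)faceIndex];
}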

Yeah you'll want to use Short4. That way your shader will get the value as 4 floats, which you can use as indices.

Many thanks. Can I get an example in the shader? I can't seem to get it working. Also, translations are held in the bottom row of the matrix, yes? Because I build each instance's translation matrix like so:

float4x4 instanceTrans = { 1, 0, 0, 0,
                           0, 1, 0, 0,
                           0, 0, 1, 0,
                           trans[0], trans[1], trans[2], 1 };

and when I multiply that by my rotations, things get weird. But if I don't, then the faces face up as expected.
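
EDIT: I suspect my problem is just multiplication order. With XNA's row-vector convention the translation does sit in the bottom row (M41-M43), and the rotation has to be applied before the translation, so something like this (untested sketch; parameter names are placeholders):

// Rotate about the face's own origin first, then move it into place.
float4x4 world = mul(instanceRot, instanceTrans);
float4 worldPos = mul(float4(input.Position, 1), world);
output.Position = mul(worldPos, mul(View, Projection));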

