As we said before, each frame could potentially exceed valid texture sizes, and on devices that don't support non-power-of-two textures this could result in a phenomenal waste of precious on-board RAM and cause some pretty nasty resource fighting.
So in an effort to reduce this, we decided that certain large images will be cut into smaller parts. Our maximum 'part size' is 256x256; this is an optimal size, and a good chunk of space for most of what we are dealing with.
Getting images into chunks, however, is not super easy, and requires a bit of thought so we can squeeze out every drop of performance we can.
The super easy approach:
The easiest approach is to simply cut every image with a dimension larger than 256 into multiple 256x256 images. While this is not a terrible plan, it can have some devastatingly wasteful problems.
This isn't a very extreme case, but it's still a nice chunk of RAM that is going to waste.
A better approach:
Now, on some devices the above scenario cannot be avoided; this is why it is always a good idea to keep your images within 'nice power of two' boundaries. However, most hardware can do textures that have a different power of two on each dimension.
So instead of 256x256, it could perhaps do 128x256.
In the above case you might notice that our image only takes up 256x256 + 128x256. This means that, if the device supports it, we could use a 128x256 texture instead of a second 256x256 one, saving any and all waste in this case.
What if the piece was smaller? How about a 32x256 piece? If the hardware can do it, and it will fit, it is probably a good idea to use these smaller pieces to 'cap off' the remainder of the nicely sized texture blocks.
So how am I supposed to know what sizes it can use, smarty?
Well, we could do lots of device-caps checking and probably some pre-calculated tables, but in my old age I've gotten too lazy for that. =)
D3DXCreateTexture is a wonderful function; you can pass it the parameters of a texture you want, and it will modify them to what the hardware can handle.
So, if we have our nice little weirdo overhang piece of 53x200, we can feed that in, and if the device supports it, we should get back a 64x256 texture; if it doesn't, we will get back a 256x256 texture. Again, that is wasteful, but there isn't much we can do otherwise (except for breaking it into 64x64 pieces, but such a method would create far more texture switches when rendering, which is a bad idea).
So there, D3DX is going to do the work for us; how about that?
When drawing the final image, we need only set up some quads to properly match the slices we have, and then render them out by texture.
A final word on this method: slicing images into parts works okay in most instances. Where it does not work so well is when you do a lot of 'weird' scaling with linear filtering on the draw; this can create visible seams, which are present in Morning's Wrath on certain hardware at certain resolutions, so keep that in mind. =)