
D3DX stops grey hair

EDI


So, I have the Frame and Sheet objects basically set up, and now comes the 'meat' of it.

As we said before, each frame could potentially exceed valid texture sizes, and on devices that don't support non-power-of-two textures this can result in a phenomenal waste of precious on-board RAM and cause some pretty nasty resource fighting.

So in an effort to reduce this, we decided that certain large images will be cut into smaller parts. Our maximum 'part size' is 256x256; that is an optimal size, and a good chunk of space for most of what we are dealing with.

Getting images into chunks, however, is not super easy, and requires a bit of thought so we can squeeze out every drop of performance we can.

The super easy approach:

The easiest approach is to simply cut every image with a dimension larger than 256 into multiple 256x256 images. While this is not a terrible plan, it can have some devastatingly wasteful problems.

Example:

This isn't a very extreme case, but it's still a nice chunk of RAM going to waste.
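To put a number on that waste, here is a minimal sketch (not our engine's actual code; the function names are made up for illustration) of how many fixed-size tiles the naive cut needs, and how much padding it burns:

```cpp
#include <cassert>

// Naive slicing: cut a W x H image into fixed-size square tiles.
// Edge tiles are mostly padding, which is where the waste comes from.
int naiveTileCount(int width, int height, int tileSize = 256) {
    int cols = (width  + tileSize - 1) / tileSize;  // round up per axis
    int rows = (height + tileSize - 1) / tileSize;
    return cols * rows;
}

// Pixels of texture memory wasted on padding by the naive cut.
int naiveWaste(int width, int height, int tileSize = 256) {
    return naiveTileCount(width, height, tileSize) * tileSize * tileSize
         - width * height;
}
```

For a 384x256 image, the naive cut uses two full 256x256 tiles, so fully half of the second tile (32,768 pixels) is padding.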


A better approach:

Now, on some devices the above scenario cannot be avoided, which is why it is always a good idea to keep your images within 'nice power of two' boundaries. However, most hardware can handle textures that have a different power of two on each dimension.

So instead of 256x256, it could perhaps do 128x256.

Now in the above case you might notice that our image only takes up 256x256 + 128x256. This means that, if the device supports it, we could use a 128x256 texture instead of a second 256x256 one, saving any and all waste in this case.

What if the piece were smaller? How about a 32x256 piece? If the hardware can do it, and it will fit, it is probably a good idea to use these smaller pieces to 'cap off' the remainder of nicely sized texture blocks.
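Picking that cap size is simple arithmetic: after cutting full 256-wide columns, round the leftover strip up to the next power of two. A hedged sketch (helper name is invented for illustration):

```cpp
#include <cassert>

// After cutting full tileSize-wide columns out of an image, return the
// smallest power-of-two width that covers the leftover strip,
// or 0 if the image divides evenly and there is no overhang.
int capWidth(int imageWidth, int tileSize = 256) {
    int leftover = imageWidth % tileSize;
    if (leftover == 0) return 0;      // no overhang to cap off
    int p = 1;
    while (p < leftover) p <<= 1;     // round up to next power of two
    return p;
}
```

A 384-wide image caps off with the 128x256 piece from the text; a 288-wide one needs only a 32x256 cap.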


So how am I supposed to know what sizes it can use, smarty?

Well, we could do lots of device caps checking and probably some pre-calculated tables, but in my old age I've gotten too lazy for that. =)

D3DXCreateTexture is a wonderful function; you pass it the parameters of the texture you want, and it modifies them to what the hardware can handle.

So, if we have our nice little weirdo overhang piece of 53x200, we can feed that in, and if the device supports it we should get back a 64x256 texture. If it doesn't, we will get back a 256x256 texture. Again, that is wasteful, but there isn't much we can do otherwise (except for breaking it into 64x64 pieces, but such a method would create far more texture switches when rendering, which is a bad idea).
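We can't actually call D3DXCreateTexture here without a device, but a rough, device-free emulation of the adjustment it performs might look like this (the function and the squareOnly flag are stand-ins for the real call and the D3DCAPS9 checks, not the D3DX API itself):

```cpp
#include <algorithm>
#include <cassert>

// Round n up to the next power of two (n assumed >= 1).
int nextPow2(int n) {
    int p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Hypothetical emulation of the size adjustment D3DXCreateTexture makes:
// round each axis up to a power of two, and if the device only supports
// square textures, grow the smaller axis to match the larger one.
void adjustTextureSize(int reqW, int reqH, bool squareOnly,
                       int& outW, int& outH) {
    outW = nextPow2(reqW);
    outH = nextPow2(reqH);
    if (squareOnly)                   // stand-in for a caps-bits check
        outW = outH = std::max(outW, outH);
}
```

Feeding in the 53x200 overhang yields 64x256 on a capable device, and 256x256 on a square-only one, matching the two outcomes described above.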


So there, D3DX is going to do the work for us. How about that?

When drawing the final image, we need only set up some quads to properly match the slices we have, and then render them out, batched by texture.
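Generating those quads is just a walk over the slice grid. A minimal sketch, assuming the naive 256-pixel grid (the Quad struct and function name are illustrative, not our engine's types):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Quad { int x, y, w, h; };  // destination rectangle for one slice

// Build one quad per slice of a W x H image so the slices reassemble
// seamlessly; each quad is drawn with its slice's texture bound.
std::vector<Quad> buildQuads(int width, int height, int tileSize = 256) {
    std::vector<Quad> quads;
    for (int y = 0; y < height; y += tileSize)
        for (int x = 0; x < width; x += tileSize)
            quads.push_back({ x, y,
                              std::min(tileSize, width  - x),
                              std::min(tileSize, height - y) });
    return quads;
}
```

For the 384x256 example this produces two quads: a full 256x256 one and a 128x256 cap, which is exactly the layout the 'better approach' section aims for.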


A final word on this method: the 'slice the image up into parts' method works okay in most instances. Where it does not work all that well is when you do a lot of 'weird' scaling and linear filtering on the draw; this can create visible seams, which are present in Morning's Wrath on certain hardware at certain resolutions, so keep that in mind. =)



4 Comments


Recommended Comments

Absolutely not =)

We decided to go with Direct3D because Windows is our target platform first and foremost, and I feel it is better supported under Windows than OGL.

What we've been doing on the current project I'm working on is, any time we have wasted space like that, we try to fill it up with smaller images, mostly interface icons. Then we have a text file that's loaded in, which stores the file each image uses and what section of that texture it uses.

Hope that can help you out.

I assume you've considered this technique for packing lightmaps? Sounds to me like a better, more general solution than what you're doing. Basically a BSP tree for image data. It's my personal preference for handling "image soup." Maybe that's what you're already doing and I'm just not getting it from your examples?
