Creating a GUI with Direct3D

Started by
13 comments, last by MasterWorks 19 years, 7 months ago
So far, my candidate methods for easily rendering non-square images to the screen while still supporting graphics cards that only handle square power-of-two textures are:

1) Create the textures, then scale them in something like Photoshop to a supported size (e.g. 32x32) and save them again. Then, when they are rendered to a quad of the correct size, they'll be scaled back down to the right proportions.

Drawbacks: Possibly a pain for the artist, and the resize up and back down may mangle the image.

2) Store the images at their normal size (e.g. 20x10) and let the D3DX functions scale them up to fit a supported texture size, e.g. 32x32. Then, when they are rendered to a quad of the original size, they'll be scaled down to look right again.

Drawbacks: May also mangle the image during resizing up and back down.

3) New idea: Pack several different GUI components into one file that is a square power-of-two size (like what skin engines often do to store their elements) and use the ID3DXSprite interface, which lets you render just a portion of a texture to the screen.

Drawbacks: relying on the ID3DXSprite interface. (Does anyone know how I could do this without using the sprite interface? That is, how do you render a specified portion of a texture onto a quad?)

There must be a standard way that a lot of game developers do this... >_< lol
#3 is the solution that I use, and I believe what the Microsoft sample SDK uses as well.

Just put all your GUI components into one 256x256 texture.

Use texture coordinates to identify each sprite component you will be drawing from the texture. For instance, your up-scroll button might be represented by the rectangle (0.0, 0.0, 0.15, 0.15).

This has the added advantage of being extra fast: to draw all your GUI components, you only need one texture change. You could also put all your GUI quads into a single vertex buffer.

BTW, you do *not* have to use the ID3DXSprite interface to do this. In fact, you can use texture coordinates with pretty much any Direct3D implementation. You'll notice that most of the sample sprite classes in Direct3D use texture coordinates; by changing those coordinates, you change what part of the source image you are drawing from.
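To make the texture-coordinate approach concrete, here's a small sketch of what I mean (names like `pixelsToUV` and `buildQuad` are my own, not from any SDK sample): it converts a pixel rectangle inside an atlas into UVs and fills a pretransformed quad you could hand straight to `DrawPrimitiveUP` -- no ID3DXSprite needed:

```cpp
#include <cassert>

// Normalized texture coordinates for one sub-rectangle of an atlas.
struct UVRect { float u0, v0, u1, v1; };

// Convert a pixel-space rectangle inside the atlas into UVs.
UVRect pixelsToUV(int x, int y, int w, int h, int texW, int texH) {
    UVRect r;
    r.u0 = (float)x / texW;
    r.v0 = (float)y / texH;
    r.u1 = (float)(x + w) / texW;
    r.v1 = (float)(y + h) / texH;
    return r;
}

// Pretransformed vertex: same layout as D3DFVF_XYZRHW | D3DFVF_TEX1.
struct TLVertex { float x, y, z, rhw, tu, tv; };

// Fill a screen-space quad in triangle-strip order, mapping the
// given UV sub-rectangle onto it.
void buildQuad(TLVertex v[4], float x, float y, float w, float h, UVRect uv) {
    v[0] = { x,     y,     0.0f, 1.0f, uv.u0, uv.v0 }; // top-left
    v[1] = { x + w, y,     0.0f, 1.0f, uv.u1, uv.v0 }; // top-right
    v[2] = { x,     y + h, 0.0f, 1.0f, uv.u0, uv.v1 }; // bottom-left
    v[3] = { x + w, y + h, 0.0f, 1.0f, uv.u1, uv.v1 }; // bottom-right
}

// Drawing then looks roughly like:
//   device->SetTexture(0, atlas);
//   device->SetFVF(D3DFVF_XYZRHW | D3DFVF_TEX1);
//   device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, v, sizeof(TLVertex));
```

Since the quad is just four vertices with UVs, batching several GUI elements from the same atlas into one vertex buffer is trivial.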
Co-creator of Star Bandits -- a graphical Science Fiction multiplayer online game, in the style of "Trade Wars".
With regard to irregular size GUI elements (or any other graphic), here is the strategy I use to get good results on non-power-of-two images:

1. Let's assume you're making a game element that you want to be 120x40. Your final power-of-two image is going to be 128x64, which is the next cheapest size available to you.

2. Create your actual graphics (in Photoshop, etc.) at the correct aspect ratio, but larger. In the example, your image needs to be AT LEAST 192x64 so that the final step of resizing to 128x64 does not cause upsampling in EITHER dimension. (Instead, we have 192 columns that will be compressed into 128, and 64 rows that will not be compressed at all.)

Always perform editing, adding text, working with layers, etc. on the full-size 192x64 image. Exporting into a game-usable texture is always the last step.

3. Resize (bicubic!) the 192x64 image down to 128x64; obviously you'll have to disable 'Preserve Aspect Ratio.' The image will look squished/distorted for now...

4. Respect the original aspect ratio when generating your vertices so the texture appears correct when rendered.
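The sizing math in steps 1-3 can be sketched as a small helper (my own naming; integer arithmetic to avoid floating-point rounding surprises): it picks the power-of-two texture size and the minimum authoring resolution that keeps the final downsize from upsampling either dimension:

```cpp
#include <cassert>

// Smallest power of two >= n (textures get padded up to this).
int nextPow2(int n) {
    int p = 1;
    while (p < n) p <<= 1;
    return p;
}

// Given the desired in-game size, compute the power-of-two texture
// size and the minimum authoring size: same aspect ratio as the game
// element, but large enough that resizing down to texW x texH never
// upsamples in either dimension.
void authoringSize(int gameW, int gameH,
                   int& texW, int& texH, int& artW, int& artH) {
    texW = nextPow2(gameW);
    texH = nextPow2(gameH);
    // Compare the two enlargement ratios texW/gameW and texH/gameH
    // by cross-multiplying, so everything stays in integers.
    if (texW * gameH >= texH * gameW) {            // width grows more
        artW = texW;
        artH = (texW * gameH + gameW - 1) / gameW; // ceil(texW*gameH/gameW)
    } else {                                       // height grows more
        artH = texH;
        artW = (texH * gameW + gameH - 1) / gameH; // ceil(texH*gameW/gameH)
    }
}
```

For the 120x40 example this yields a 128x64 texture and a 192x64 authoring image, matching the numbers above.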

For me this process has had good results; since your textures have to be power of two, you might as well pack as much detail into them as possible. There should be minimal 'mangling' of the texture as it is resized, since there is never an upsample.

Obviously the method still isn't perfect, as your image will still be resampled twice: once in Photoshop (a high quality downsample) and once at run-time (HOPEFULLY a downsample!) Realistically, though, pixel-perfect results are basically impossible to obtain since your game could be running in any number of resolutions.

Note that this system basically assumes you have one texture per image file, as I prefer for simplicity. The drawback is that you can't render vertices in a 'batch' since every graphic requires switching to a new texture.

An alternative to this approach is the system with many GUI elements packed as efficiently as possible into a single texture (or multiple textures.) This has the obvious benefit of requiring fewer SetTexture calls, but the speed hit should be relatively minor if your GUI isn't ridiculously complicated.

Note that in a well designed D3D app the in-game units (120x40, etc) are resolution independent, but this doesn't affect the process. You should make your graphics even larger if you want additional detail when playing in high resolutions, such that the rasterization never involves upsampling your texture.

-Jay
One more thing with regard to D3DPTEXTURECAPS_NONPOW2CONDITIONAL:

I wouldn't be surprised if some drivers that support this may just create an image that's too large and leave a lot of it blank. If that's the case, you're much better off doing things on your own! I have no evidence for that, just a thought.
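For completeness, deciding whether you need the pow2 workaround at all comes down to a caps check. A minimal sketch -- the flag values are copied from d3d9caps.h (double-check against your SDK headers), and note that even with NONPOW2CONDITIONAL set, non-pow2 textures only work under restrictions such as clamp addressing and no mipmaps:

```cpp
#include <cassert>

// Flag values as defined in d3d9caps.h (assumed; verify in your SDK).
const unsigned long TEXCAPS_POW2                = 0x00000002; // D3DPTEXTURECAPS_POW2
const unsigned long TEXCAPS_NONPOW2CONDITIONAL  = 0x00000100; // D3DPTEXTURECAPS_NONPOW2CONDITIONAL

// True if every texture must be padded/scaled to power-of-two sizes.
// In the real app, textureCaps comes from:
//   D3DCAPS9 caps;
//   device->GetDeviceCaps(&caps);   // then pass caps.TextureCaps
bool mustUsePow2(unsigned long textureCaps) {
    bool pow2Only = (textureCaps & TEXCAPS_POW2) != 0;
    bool cond     = (textureCaps & TEXCAPS_NONPOW2CONDITIONAL) != 0;
    return pow2Only && !cond; // conditional support relaxes the rule
}
```

If `mustUsePow2` returns false you can skip the resize pipeline entirely on that hardware, though shipping the pow2 path anyway keeps behavior uniform across cards.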

-Jay

