Best way to use textures

Started by trick; 5 comments, last by trick 17 years, 1 month ago
I had read somewhere that assigning textures often is a quick way to decrease performance. This brings me to a few questions, and I'm hoping someone here might be able to help.

1) Is the above even true?

2) Given the option of looping through an array of objects once, assigning each texture as you go (possibly assigning a texture multiple times), or looping through the array multiple times, each pass looking for a specific texture (so that each texture is only applied once), which would be faster (and is it a significant difference)? I question this one due to the number of if statements that could be taking place, depending on the size of the array and how many total textures you have.

3) How viable is it to have many "textures" on the same image, such that you apply a texture once and just use various UV coords for your polygons? Or, a better question, would this be a good way to avoid the performance decrease of assigning textures? (ie: an image that contains many different smaller images, so you only apply that texture one time and each object knows which part of the texture it needs.)

If it matters, I am using VC++ 6.0 and DirectX 9. Also, I hear a lot of talk about running a program through a profiler to find the biggest causes of performance decreases. Can anyone tell me where to find these profilers?
As a rule of thumb, you should issue as few calls from the CPU to the GPU as possible. It can be a good idea to merge the textures of many objects into one big map; this is a well-established technique called a texture atlas (atlasing). NVIDIA has a free tool named NVPerfHUD that analyzes your performance pretty well.
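To make that concrete, here is a minimal sketch of skipping redundant texture binds in a D3D9 render loop. The Object type, its texture member, and Draw() are illustrative names, not from any particular framework:

// Bind a texture only when it actually changes between objects.
IDirect3DTexture9* current = NULL;
for (int i = 0; i < numObjects; ++i)
{
    Object& obj = objects[i];
    if (obj.texture != current)
    {
        device->SetTexture(0, obj.texture); // stage 0
        current = obj.texture;
    }
    obj.Draw(device); // hypothetical per-object draw call
}

Note that this only helps if objects sharing a texture sit next to each other in the array, which is where sorting (discussed further down) comes in.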
Quote:Original post by trick
I had read somewhere that assigning textures often is a quick way to decrease performance. This brings me to a few questions, and I'm hoping someone here might be able to help.

1) Is the above even true?

Yes, binding textures can be a significant performance hit.

Quote:2) Given the option of looping through an array of objects once, assigning each texture as you go (possibly assigning a texture multiple times), or looping through the array multiple times, each pass looking for a specific texture (so that each texture is only applied once), which would be faster (and is it a significant difference)? I question this one due to the number of if statements that could be taking place, depending on the size of the array and how many total textures you have.

I've used this technique with great success on multiple occasions. YES it is a significant difference! On a "simple" test case, my FPS jumped from ~40-60 to 100+.

Quote:3) How viable is it to have many "textures" on the same image, such that you apply a texture once, and just use various UV coords for your polygons? Or, a better question, would this be a good way to avoid the performance decrease of assigning textures? (ie: an image that contains many different smaller images, so you only apply that texture one time and each object knows which part of the texture it needs).

Perfectly viable. It isn't really a performance win by itself, though, since the texture still has to be bound by whatever draws those polygons. It's more useful for managing texture memory effectively: squeezing two non-power-of-two textures into one power-of-two texture lets GPU (or system) memory store them more efficiently, leaving more room for other textures.

Quote:If it matters, I am using VC++ 6.0 and DirectX 9. Also, I hear a lot of talk about running a program through a profiler to find the biggest causes of performance decreases. Can anyone tell me where to find these profilers?

NVIDIA makes NVPerfHUD, but a simple timer, some start()/stop() calls, and a printf() usually work fine for me. [smile]
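In case it's useful, here's a rough sketch of that timer approach using the Win32 high-resolution counter; RenderScene() is a stand-in for whatever code you want to measure:

#include <windows.h>
#include <stdio.h>

void TimedRender()
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    RenderScene(); // hypothetical: the code being profiled
    QueryPerformanceCounter(&stop);

    printf("RenderScene: %.3f ms\n",
           (stop.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart);
}

Crude, but often enough to tell which of two approaches wins.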

Sounds like you're on the right track. The easiest way to tell what is faster, though, is to time the different versions and see who wins. As in most things, there's no definitive answer. Best of luck!
-jouley
Quote:Original post by trick
If it matters, I am using VC++ 6.0 and DirectX 9. Also, I hear a lot of talk about running a program through a profiler to find the biggest causes of performance decreases. Can anyone tell me where to find these profilers?
Slightly off topic here - Why are you using VC6? It's 10 years old, it's not supported by any DirectX SDKs that have come out in the last year and a bit (Last one was Dec 2005 I think), it's less standards compliant than VC 2005, there are several known issues with it, and I don't know of any profilers that support it.

Visual Studio 2005 Express is free and at least as good as VC6.
I use it because I got it for free. I haven't looked into upgrading due to the cost, but I only just recently heard about Visual Studio 2005 Express. I've been wondering what the difference is between it (being free) and the latest that you might buy. I just haven't taken the time to look into it yet.
Quote:Original post by trick
1) Is the above even true?

Yes. Setting a texture, shader, or vertex buffer is one of the most costly operations you're likely to perform several times during a frame. But don't let this govern your life: going from three to ten texture changes per frame will not affect your frame-rate. However, a general increase by a factor of three will do some damage when each of several hundred entities is setting its own texture. For this reason, you should sort the geometry so that it is rendered roughly in groups sharing the same texture/shader.

Depending on what you read, you'll be told to optimise in different ways; usually incompatible ones. It takes a bit of experience to be able to strike a good balance, but with a little practice and careful thought, you should be able to render vertex-cache-optimised geometry in a render-state-change-friendly manner.

Quote:2) Given the option of looping through an array of objects once, assigning each texture as you go (possibly assigning a texture multiple times), or looping through the array multiple times, each pass looking for a specific texture (so that each texture is only applied once), which would be faster (and is it a significant difference)? I question this one due to the number of if statements that could be taking place, depending on the size of the array and how many total textures you have.
Neither sounds great. The optimal solution would be to sort the array at load-time (and keep it sorted as it is modified) so that rendering it in one pass requires minimal state-changes.
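A minimal sketch of that load-time sort, assuming the same illustrative Object type used earlier in the thread:

#include <algorithm>
#include <vector>

// Order objects by texture so that equal textures end up adjacent.
bool ByTexture(const Object& a, const Object& b)
{
    return a.texture < b.texture; // any consistent ordering will do
}

// At load time (and again whenever the array changes):
std::sort(objects.begin(), objects.end(), ByTexture);

With the array sorted, a single render pass only needs to call SetTexture at the boundary between one texture group and the next; there's no per-texture searching and no repeated passes.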

Quote:3) How viable is it to have many "textures" on the same image, such that you apply a texture once, and just use various UV coords for your polygons? Or, a better question, would this be a good way to avoid the performance decrease of assigning textures? (ie: an image that contains many different smaller images, so you only apply that texture one time and each object knows which part of the texture it needs).
It's very viable. The technique is known as atlas-texturing. There are pros and cons. The pros are obvious, the cons less so. Notably, texture filtering will cause the textures to 'bleed' into one another at the edges. For this reason, you're strongly advised to leave a few pixels of 'gutter' between separate texture elements. Also, texture-clamping becomes near-useless when the edges of the polygons aren't mapped to the edges of the textures.
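As a rough sketch of the gutter idea, here is how you might compute UVs for one sub-image in an atlas, insetting slightly so that bilinear filtering doesn't pull in the neighbouring element (all names and sizes are illustrative):

// Atlas of 512x512 pixels; one sub-image at (x, y), size (w, h), in pixels.
const float atlasSize = 512.0f;
const float inset = 1.0f; // pixels to pull in from each edge
float x = 64.0f, y = 0.0f, w = 64.0f, h = 64.0f;

float u0 = (x + inset) / atlasSize;
float v0 = (y + inset) / atlasSize;
float u1 = (x + w - inset) / atlasSize;
float v1 = (y + h - inset) / atlasSize;

On top of the inset, you would still leave a gutter of unused (or duplicated-border) pixels between elements when building the atlas itself, especially if mipmapping is involved.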

Another often-overlooked way of minimising texture changes is to use as many texture stages as you have available. This can be a bit of a pain, as hardware support varies, but if you consider that many recent cards offer sixteen texture stages, it's worth the investment.
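For instance, a fixed-function sketch that blends two textures in a single pass rather than in two texture-swapping passes (baseTexture and detailTexture are illustrative):

// Stage 0 samples the base map; stage 1 modulates a detail map over it.
device->SetTexture(0, baseTexture);
device->SetTexture(1, detailTexture);
device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);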

Quote:If it matters, I am using VC++ 6.0 and DirectX 9. Also, I hear a lot of talk about running a program through a profiler to find the biggest causes of performance decreases. Can anyone tell me where to find these profilers?

The topic goes quite deep (see this page), but you can get the salient data without too much effort. Get hold of PIX (in the DirectX SDK) for profiling Direct3D calls, and, depending on your processor, Intel VTune or AMD CodeAnalyst will take care of general application- or system-wide profiling.

Admiral
Ring3 Circus - Diary of a programmer, journal of a hacker.
Thanks for the help all!

I downloaded the NVPerfHUD and tried to run my app through it, but I had some problems. The program crashed instantly. I thought maybe it was because I was running a debug version through, so I tried to recompile my program in release mode but got the following errors.

WinMain.obj : error LNK2001: unresolved external symbol _D3DXCreateTextureFromFileA@12
ZTemplate.obj : error LNK2001: unresolved external symbol _D3DXCreateTextureFromFileA@12
WinMain.obj : error LNK2001: unresolved external symbol _Direct3DCreate9@4
Release/block_buster.exe : fatal error LNK1120: 2 unresolved externals


For my current project, it really isn't important. I'm building a very basic game that could be done in 2D, though I'm using 3D instead. (I also heard a rumor that with the 3D acceleration in today's hardware, 3D would run faster than 2D.) Anyhow, I'd like to figure this out either way even though performance drops won't make any significant difference (more of a learning exercise than anything). To me, it seems safe to assume that the errors I get when compiling in release mode may have something to do with why NVPerfHUD crashes my program when I try to run it through. The problem is I don't understand why my program would build fine in debug but give the above errors in release.

system info:
VC++6.0
DirectX9
WindowsXP
RadeonX1600 video card


EDIT: Okay, so I guess I understand why I can't use NVPerfHUD (I'm using a Radeon video card). I still wonder about the release build problems, though...
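For the record, the usual cause of those particular unresolved externals is that VC6 keeps a separate linker library list for each build configuration, so d3d9.lib and d3dx9.lib were most likely added to the Debug settings but not to Release. One likely fix, assuming that's the case here: add them under Project -> Settings -> Link for the Release configuration, or pull them in from source so every configuration gets them:

// Assumed fix: link the D3D9 import libraries from code so both the
// Debug and Release configurations resolve the D3D9/D3DX symbols.
#pragma comment(lib, "d3d9.lib")
#pragma comment(lib, "d3dx9.lib")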

