The restrictions on textures

Started by Goodman · 9 comments, last by jollyjeffers 16 years, 3 months ago
I know that when we use textures, we're told it's best to use square textures, with power-of-2 dimensions, no larger than 256x256 (my card's maximum texture size is 4096x4096). But I want to know why we should observe these rules:

1. Why square textures? What happens if they aren't?
2. Why power-of-2? What happens if they aren't?
3. Why the maximum size? (If we exceed it, we can't load the texture.)

Thanks!
With number 3 you've already answered your own question:

Several (older) cards have limits on surface sizes. If you try to create a texture outside their capabilities, the CreateTexture call will simply fail.
These limits are probably imposed for optimization reasons: the math is simpler if the texture's width and height are powers of 2 and/or equal (i.e. square).

Now you could go and check the D3DCAPS9 struct and adjust the sizes before the CreateTexture call.
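A minimal sketch of what that caps check might look like (assuming a valid IDirect3DDevice9* is available; error handling omitted):

```cpp
#include <d3d9.h>
#include <algorithm>

IDirect3DTexture9* CreateFittingTexture(IDirect3DDevice9* device,
                                        UINT width, UINT height)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    // Clamp to the reported maximum surface size.
    width  = std::min(width,  (UINT)caps.MaxTextureWidth);
    height = std::min(height, (UINT)caps.MaxTextureHeight);

    // If the card insists on power-of-2 sizes, round down to one.
    if (caps.TextureCaps & D3DPTEXTURECAPS_POW2) {
        while (width  & (width  - 1)) width  &= width  - 1; // keep highest set bit
        while (height & (height - 1)) height &= height - 1;
    }

    // If the card only accepts square textures, shrink to the smaller side.
    if (caps.TextureCaps & D3DPTEXTURECAPS_SQUAREONLY) {
        width = height = std::min(width, height);
    }

    IDirect3DTexture9* texture = NULL;
    device->CreateTexture(width, height, 1, 0, D3DFMT_A8R8G8B8,
                          D3DPOOL_MANAGED, &texture, NULL);
    return texture;
}
```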

In the end it's just plain simpler to stay at the lowest common denominator. Use square 256x256 textures and they are guaranteed to work everywhere.

Downside: all newer cards run below their capabilities; they could handle far bigger textures (and thus you'd have prettier graphics).


No real reason to observe these "rules".

You can safely use 2048x2048 textures, which all cards have supported for several years now. If you really want to play it as safe as can be, use 1024x1024 or less, since some older Intel integrated chips didn't support 2048x2048. That's safe for virtually all cards since the dawn of time (perhaps a slight exaggeration), except for 3dfx (Voodoo) cards, and nobody is using those any longer. Note that smaller textures usually mean faster performance (they cache better), but often a higher resolution in a compressed (DXT) format gives better image quality at the same performance level.
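To put rough numbers on that last point (simple arithmetic, assuming 4 bytes per texel uncompressed and DXT1's fixed 8:1 ratio relative to 32-bit):

```
1024 x 1024 x 4 bytes     = 4 MB  (uncompressed A8R8G8B8)
2048 x 2048 x 4 bytes / 8 = 2 MB  (DXT1)
```

So a DXT1 texture at double the resolution still occupies half the memory and bandwidth of the uncompressed one.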

You don't have to use squares, though there's usually not much point in using 512x128, or whatever. Still, you can do that without any worries.

As for non-power-of-2, you can use such textures as long as you're aware of the limitations, as described in the docs (under D3DPTEXTURECAPS_NONPOW2CONDITIONAL on the D3DCAPS9 page). For 2D images with a predefined aspect ratio, such as splash screens, they can be the right thing to use.
Quote:Original post by ET3D
except for 3dfx (Voodoo) cards, and nobody is using them any longer.
Thankfully! Those and the GeForce MX/FX cards have been the biggest headache for me in the last 10 years.

One thing to note is that a lot of the D3D9 documentation is REALLY old. Bits like you're quoting (from here?) almost certainly pre-date programmable GPUs and are much more relevant to D3D7-generation hardware.

The best guidelines tend to come from the IHVs' whitepapers released around the same time as the hardware generation(s) you're targeting.

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Thanks very much........
Quote:Bits like you're quoting (from here?) almost certainly pre-date programmable GPUs and are much more relevant to D3D7-generation hardware.


Good to know, but how can we tell that newer hardware (or drivers) isn't just "emulating" things? For example, maybe if you create a 300x300 texture, the driver will internally create a 512x512 texture and hand you that, effectively removing the restrictions. What I'm trying to say is: are these no longer HARDWARE requirements on newer cards?
D3DX will by default perform the padding you describe, though there are flags for D3DXCreateTextureFromFileEx() that override this.
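For example (a sketch; the device pointer and "image.png" are placeholders, and error handling is omitted), passing D3DX_DEFAULT_NONPOW2 for the width and height keeps the file's dimensions rather than rounding up, provided the hardware actually supports NPOT textures:

```cpp
IDirect3DTexture9* tex = NULL;
D3DXCreateTextureFromFileEx(
    device, "image.png",
    D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2, // keep the file's dimensions
    1, 0, D3DFMT_UNKNOWN, D3DPOOL_MANAGED,
    D3DX_FILTER_NONE, D3DX_FILTER_NONE,
    0, NULL, NULL, &tex);
```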

As for the actual hardware, if it says it supports non-power-of-2 (NPOT) textures then it's as simple as that. Yes, the driver could lie, but that's highly unlikely, and even if it did, there's nothing you could do about it.

The 'pitch' field when locking the raw bits is the place where the GPU/driver are allowed to apply any relevant padding, if they, for example, required even-numbered dimensions...
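Which is why you should always copy row by row. A sketch (assuming 'tex' is a lockable A8R8G8B8 texture and 'src' points to width*height tightly packed 32-bit pixels):

```cpp
D3DLOCKED_RECT lr;
if (SUCCEEDED(tex->LockRect(0, &lr, NULL, 0)))
{
    const BYTE* srcRow = (const BYTE*)src;
    BYTE*       dstRow = (BYTE*)lr.pBits;
    for (UINT y = 0; y < height; ++y)
    {
        memcpy(dstRow, srcRow, width * 4); // one row of pixels
        srcRow += width * 4;               // source is tightly packed
        dstRow += lr.Pitch;                // destination rows may be padded
    }
    tex->UnlockRect(0);
}
```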

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Quote:Original post by Goodman
power of 2 why??

Because it's faster to handle wrapping and mirroring, to say the least. All you need to achieve wrapping is an AND operation on the coordinate instead of a modulus. Mirroring is simply a matter of an exclusive OR, instead of computing TEXTURE_SIZE - 1 - coordinate. Deciding whether we are in the mirrored area is achieved by simply examining the MSB (1 = mirror, 0 = not) instead of performing a division and then checking whether the result is even or odd.

You might want to learn about binary arithmetic if all this seems confusing. It's pretty simple, actually. Hint: in a decimal number system, you'd use powers of 10.
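A sketch of those bit tricks for a hypothetical 256-texel texture (the hardware does this in dedicated address units; this just illustrates the math):

```cpp
const unsigned SIZE = 256;      // power of 2
const unsigned MASK = SIZE - 1; // 0xFF

// Wrapping: one AND instead of a modulus.
unsigned wrap(unsigned u) { return u & MASK; } // same result as u % SIZE

// Mirroring: test one bit for "which copy am I in", XOR to flip.
unsigned mirror(unsigned u)
{
    unsigned t = u & MASK;  // position within the tile
    if (u & SIZE)           // bit set <=> (u / SIZE) is odd, i.e. mirrored copy
        t ^= MASK;          // equals SIZE - 1 - t, for power-of-2 sizes only
    return t;
}
```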

[Edited by - Cybernator on January 3, 2008 7:09:58 AM]
Quote:Original post by jollyjeffers
...

I understand that, but whether NPOT and other related caps are supported at the hardware level or at the software (driver) level could be a significant fact to consider when designing your app. For example, if it's supported at the hardware level, possibly with minimal additional performance and memory overhead, then it might be better not to use a texture atlas and just have a separate texture for each... well... texture. I suppose one could tell by examining the difference between the pitch and the actual width of a texture for a given set of dimensions. For example, if a texture created 130 pixels wide comes back with a pitch equivalent to 256 pixels, it would be obvious that the driver is sticking to powers of two internally.

It is a bit surprising that this information is not readily available, because, like I said, it could matter a great deal when designing an engine/app.
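A sketch of that probe (assuming a valid IDirect3DDevice9*; note that Pitch is in BYTES, not pixels, so a tightly packed 130-pixel A8R8G8B8 row is 130 * 4 = 520 bytes, and on a strictly POW2-only device the CreateTexture call would simply fail instead):

```cpp
IDirect3DTexture9* probe = NULL;
if (SUCCEEDED(device->CreateTexture(130, 130, 1, 0, D3DFMT_A8R8G8B8,
                                    D3DPOOL_MANAGED, &probe, NULL)))
{
    D3DLOCKED_RECT lr;
    probe->LockRect(0, &lr, NULL, D3DLOCK_READONLY);
    // 520 => tightly packed; 2048 (512 * 4) would suggest pow-2 padding.
    printf("pitch = %d bytes (tight row = %d bytes)\n", lr.Pitch, 130 * 4);
    probe->UnlockRect(0);
    probe->Release();
}
```

One caveat: locking a managed texture gives you the system-memory copy, so the pitch you see there doesn't necessarily reveal how the GPU lays the texture out in video memory.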
The driver can't emulate NPOT textures. If it did, it would have to pad, and then the texture coordinates would no longer match the image, resulting in clearly visible glitches. There's no way the driver can adjust all the texture coords you put into vertex structures or output from a vertex shader so that the error stays invisible. If it can adjust them, it's some trick in hardware; then you have working NPOT textures, so you don't need to worry.

It's important to pay attention to the rules defined by DX under which NPOT usage is allowed. Some cards support NPOT textures without fuss; others (older Radeons, including the X1000 series, AFAIK) trash the texture contents when the rules are violated. The difference can be read from the capabilities: if D3DPTEXTURECAPS_NONPOW2CONDITIONAL is set, you need to adhere to the rules. If neither D3DPTEXTURECAPS_POW2 nor that flag is set, the card supports NPOT textures under any circumstances. To my knowledge, most NVIDIA cards do; ATI is a bit more problematic, and I have no knowledge of onboard graphics.
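Expressed in code, that decision reads something like this (a sketch, assuming a valid IDirect3DDevice9*; see the D3DCAPS9 docs for the exact list of conditional restrictions):

```cpp
D3DCAPS9 caps;
device->GetDeviceCaps(&caps);

const bool pow2Only    = (caps.TextureCaps & D3DPTEXTURECAPS_POW2) != 0;
const bool conditional = (caps.TextureCaps & D3DPTEXTURECAPS_NONPOW2CONDITIONAL) != 0;

if (!pow2Only && !conditional)
{
    // Unconditional NPOT support: any size, any addressing mode.
}
else if (pow2Only && conditional)
{
    // Conditional NPOT: allowed, but stick to the rules
    // (clamp addressing, no texture wrapping, no DXT formats, ...).
}
else
{
    // Power-of-2 dimensions required.
}
```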
----------
Gonna try that "Indie" stuff I keep hearing about. Let's start with Splatter.

