Texture Sizes?!

Hi guys. To start with, I'll quote from NeHe's tutorial #6: "The image height and width MUST be a power of 2. The width and height must be at least 64 pixels, and for compatibility reasons, shouldn't be more than 256 pixels. If the image you want to use is not 64, 128 or 256 pixels on the width or height, resize it in an art program."

Why does the texture have to be a power of 2? And why keep it between 64 and 256 pixels in width or height? I made a program where the images I used were 16 by 16, and it seemed fine!

Thx, -J

All image sizes within the power-of-2 limitation are supported, up to the maximum for your hardware, which obviously varies from card to card. Here are some examples:

3dfx Voodoo, Banshee, Voodoo 2, Voodoo 3 = 256 * 256
3dfx Voodoo 4 & 5 = 2048 * 2048
TNT = 1024 * 1024
GeForce = 2048 * 2048
GeForce 3 = 4096 * 4096
Radeon = 1024 * 1024
Radeon 8500 = 1024 * 1024 or 2048 * 2048 (depending on drivers)
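If you don't want to hard-code a table like that, you can ask the driver at run time. A minimal sketch (plain OpenGL 1.x, no extensions assumed; needs a current GL context):

#include <GL/gl.h>
#include <stdio.h>

/* Prints the largest texture width/height the implementation claims to support. */
void PrintMaxTextureSize(void)
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
    printf("Maximum texture size: %d x %d\n", (int)maxSize, (int)maxSize);
}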

If you want to make sure your textures show up on old 3dfx cards (Voodoo 1, 2 and 3), don't let them get above 256x256.

Textures can be ANY size though - you just need to mipmap them using something like

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);

Check out the texture loading code in Lesson 7. A texture with dimensions that aren't a typical size will only show up if you tap "F" twice - because that's when it switches to the mipmapped textures.

_________
"Maybe this world is another planet's hell." -- Aldous Huxley

>>Why does the texture have to be to the power of 2? And keep it between 64 and 128 pixels in width or height? I made a program where the images i used were 16 by 16, and it seemed fine!<<

16 IS a power of 2

Powers of 2 are:
1 (not a power of 2? but you can use it)
2
4
8
16
32
64
128
etc
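(If you want to test for this in code, here's a tiny sketch using the usual bit trick - my own addition, not from the tutorial:)

/* Returns non-zero if n is a power of two (1, 2, 4, 8, 16, ...). */
int IsPowerOfTwo(unsigned int n)
{
    return n != 0 && (n & (n - 1)) == 0;
}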


http://uk.geocities.com/sloppyturds/gotterdammerung.html

>>Textures can be ANY size though - you just need to mipmap them using something like
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR_MIPMAP_NEAREST);<<

Not correct - they still have to be powers of 2. gluBuild2DMipmaps, though, will take, say, an image of size 123x432 and resize it to a power of 2, e.g. 128x256.
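A minimal sketch of uploading an odd-sized image that way - 'pixels', 'width' and 'height' are assumed to be an RGB image you have already loaded:

#include <GL/glu.h>

GLuint UploadMipmappedTexture(int width, int height, const unsigned char *pixels)
{
    GLuint id;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);

    /* gluBuild2DMipmaps rescales the image to power-of-two dimensions
       and generates the whole mipmap chain. */
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return id;
}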

Also, the Voodoo 3 can only do textures of a maximum size of 256x256, and the Voodoo 1, 2 and 3 have a problem of not accepting textures that are more than 8x wider than they are high, e.g. 4x32 is OK but 4x64 is not.

http://uk.geocities.com/sloppyturds/gotterdammerung.html

1 is a power of 2, zedzeek. 2^0 = 1.

Anyway, I'm pretty sure the minimum size is 16x16. Don't quote me on that though.

The minimum size is 1x1.

As to the fact that the TNT runs 4096x8192 - that must be the drivers doing 'something'. Nvidia drivers on the TNT only officially report the sizes I quoted.

Why can't someone just make a 3D card that likes any texture size! Oh well. How do you guys get around this restriction then? Do you just change the texture co-ords?

Edited by - jason2jason on February 24, 2002 6:12:03 AM

Edited by - jason2jason on February 24, 2002 2:09:33 PM

The power of two limitation is not a hardware limitation. It's written in the OpenGL specification that textures cannot have dimensions other than powers of two.

For GeForce hardware, and maybe the TNT series (I don't know about them), you can use the NV_texture_rectangle extension. Then you can use textures of any size, including non-power-of-two. The only real restriction is that you can't mipmap those textures.
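Roughly, using it looks like the sketch below. This is only a sketch: the enum is from glext.h, you should check the GL_EXTENSIONS string for GL_NV_texture_rectangle first, and the dimensions/pixel pointer here are placeholders.

#include <GL/gl.h>

#ifndef GL_TEXTURE_RECTANGLE_NV
#define GL_TEXTURE_RECTANGLE_NV 0x84F5   /* normally provided by glext.h */
#endif

/* Uploads a 640x480 image as a rectangle texture (no mipmaps allowed). */
GLuint UploadRectangleTexture(const unsigned char *pixels)
{
    GLuint id;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, id);
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGB8, 640, 480, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* Texture coordinates are given in texels for rectangle textures,
       e.g. (640, 480) for the far corner instead of (1, 1). */
    return id;
}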

Edited by - Brother Bob on February 24, 2002 6:18:47 AM

Guest Anonymous Poster
Actually, you don't NEED to have power-of-2 textures in OpenGL (and without using the nVidia texture rect extension either).

Consider the following:

typedef struct
{
    unsigned int  id;
    unsigned int  width;
    unsigned int  height;
    float         scaleS;
    float         scaleT;
    unsigned char channels;
    unsigned char depth;
    void         *buffer;
} ImageData;

/* Round a dimension up to the next power of two. */
static unsigned short MakePowerOfTwo (unsigned short number)
{
    unsigned short power2 = 1;

    while (number > power2)
        power2 = power2 << 1;

    return power2;
}

// Upload the texture like this:

ImageData Image;

/* Cast before dividing - plain integer division would truncate the scale to 0. */
Image.scaleS = (float)width  / (float)MakePowerOfTwo(width);
Image.scaleT = (float)height / (float)MakePowerOfTwo(height);

/* Allocate an empty power-of-two texture, then copy the real image into its lower-left corner. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, MakePowerOfTwo(width), MakePowerOfTwo(height), 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, Image.buffer);


Then after that, just multiply all of the 3D data's texture coords by Image.scaleS and .scaleT.

This might be a bit over everyone's head, but it's a cheap way of getting around that power of 2 limit (and it works with mipmapping, though this example doesn't do it).
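For instance, for a simple quad (a sketch assuming immediate-mode rendering and the Image struct above):

/* The usual [0,1] texture coordinates are scaled so they only address the
   part of the power-of-two texture that actually contains the image. */
glBegin(GL_QUADS);
    glTexCoord2f(0.0f,         0.0f);         glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(Image.scaleS, 0.0f);         glVertex3f( 1.0f, -1.0f, 0.0f);
    glTexCoord2f(Image.scaleS, Image.scaleT); glVertex3f( 1.0f,  1.0f, 0.0f);
    glTexCoord2f(0.0f,         Image.scaleT); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();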

Yes, that works, but only if you keep the texture coordinates within the range [0, Image.scale{S|T}]. If you want to repeat the texture, it won't work anymore.

The Origin: there is no way you can load a 4096x8192 texture on a TNT(2). Except if the driver scales it down before loading it.

4096 x 8192 x 4 bytes (RGBA) = 128 MB!!!

There are more worlds than the one that you hold in your hand...

quote:
Original post by Anonymous Poster
Actually, you don't NEED to have power-of-2 textures in OpenGL (and without using the nVidia texture rect extension either).



If you look closely (or not) you'll see that you are still uploading the image as a power of 2. You are just padding it out to power-of-2 dimensions before uploading it. That's not getting around the 'power of 2' specification at all, because the image you are handing to OpenGL DOES have dimensions which are powers of 2.

-----------------------
"When I have a problem on an Nvidia, I assume that it is my fault. With anyone else's drivers, I assume it is their fault" - John Carmack

Guest Anonymous Poster
brother bob!

thank you!

>>1 is a power of 2, zedzeek. 2^0 = 1.<<

Yeah, my bad (I did write a ? though, because I had doubts).

>>For GeForce hardware, and maybe the TNT series (I don't know about them), you can use the NV_texture_rectangle extension. Then you can use textures of any size, including non-power-of-two. The only real restriction is that you can't mipmap those textures.<<

TNT hardware is not capable of non-power-of-2 textures; GeForces are (but there are many restrictions with this extension, e.g. no mipmaps).

http://uk.geocities.com/sloppyturds/gotterdammerung.html

> The power of two limitation is not a hardware limitation

Wrong. It IS a hardware limitation. It has to do with the texel access circuitry in the GPU, since it is a lot faster to calculate memory addresses when the texture is a power of two. In recent hardware (GF2+) this limitation is (partially) gone, but only on very special textures: no mipmapping, restricted texture wrapping modes, no 3D textures, etc. To sum it up: it's not worth it for 3D. Texture rectangles are nice for 2D and pbuffers (projective textures) though.
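To illustrate the addressing point with a made-up sketch (not how any particular GPU actually works): with power-of-two dimensions the texel offset can be computed with shifts instead of multiplies, and repeat-style wrapping collapses to a bit mask.

/* Offset of texel (x, y) in a texture stored row by row. */

/* General case: an integer multiply, plus a modulo for wrapping. */
unsigned int OffsetGeneric(unsigned int x, unsigned int y,
                           unsigned int width, unsigned int height)
{
    return (y % height) * width + (x % width);
}

/* Power-of-two case: width == 1 << log2Width, so the multiply becomes a shift
   and the wrap becomes an AND with (size - 1). */
unsigned int OffsetPow2(unsigned int x, unsigned int y,
                        unsigned int log2Width, unsigned int log2Height)
{
    unsigned int wrappedX = x & ((1u << log2Width) - 1u);
    unsigned int wrappedY = y & ((1u << log2Height) - 1u);
    return (wrappedY << log2Width) + wrappedX;
}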

Yes, it was a hardware limitation. But the reason it was a hardware limitation in the first place was that the spec didn't allow non-power-of-two dimensions, so why bother making hardware that can handle them at all?

If the spec allowed non-power of two dimensions from the beginning, do you think the hardware would have all those limitations?

So in short. The source of the limitation is in the spec, not the hardware.

> If the spec allowed non-power of two dimensions from the beginning, do you think the hardware would have all those limitations?

Yes. Specs were designed around the hardware of that time, not the other way round. This power of two limitation was built into the OpenGL specs *because* hardware at that time had this limitation.

It is just so much simpler to design a fast pixel-fetch system that operates on power-of-two textures. It's not easy *at all* to do it with NPT textures, which is why hardware capable of that is only now starting to emerge in the consumer-level market. It is a 100% logical evolution of technology.

> So in short. The source of the limitation is in the spec, not the hardware.

Wrong again. If the hardware had been capable of NPT textures at that time, believe me, they would have changed the 2 lines of the spec that prevented its use...


Edited by - Yann L on February 27, 2002 5:32:11 PM

Designing a texel fetch unit that can use NPT sizes is possible, and has been possible, but certainly not as efficient as for PT sizes, as you say. If it was impossible, I can clearly see a reason not to include NPT sizes in the spec, but now that isn't the case.

If a hardware manufacturer can't/doesn't want to make a hardware texel fetch unit with support for NPT sizes (assuming it's in the spec), then the manufacturer is free not to do so, and let the driver do the texturing in software instead when NPT textures are used.

Limiting a spec to what the hardware is capable of at that time is not what I consider optimal. If they want it to become better, they have to push the requirements. Sure, I can agree that removing the PT limitation 10-12 years ago might have been too much. But as I said, it's not impossible, and for a few years now not so inefficient that it's totally unusable in practice, so why leave the option out?

Guess we have different opinions...

> Designing a texel fetch unit that can use NPT sizes is possible, and has been possible, but certainly not as efficient as for PT sizes, as you say. If it was impossible, I can clearly see a reason not to include NPT sizes in the spec, but now that isn't the case.

Please keep in mind *when* the spec for OpenGL was created. People didn't even think about NPT textures at that time.

> If a hardware manufacturer can't/don't want to make a hardware texel fetch unit with support for NPT sizes (assuming it's in the spec), then the manufacturer is free not to do so, and let the driver do the texturing in software instead, when NPT textures are used.

Texturing is an integral feature of OpenGL. If the specs allowed any texture size, then a hardware manufacturer would *need* to include that in hardware, or do no texturing at all (which, again, is against the specs). Otherwise, he would be modifying/implying something that was not mentioned in the specs, and he wouldn't get certified by SGI. You can't just build in a limitation that wasn't mentioned in the specs. Even if the functionality were there (through a software path), it would be next to useless: developers would use any texture size, since the specs allow them to. The manufacturer with his PT-only card would be killed on the market in no time, since his board wouldn't accelerate 99.9% of the apps/games out there...

> Limiting a spec to what the hardware is capable of at that time is not what I consider optimal.

Of course it's not optimal. But what do you consider optimal? Including every possible feature that may (or may not) be developed in the next 10 years? It doesn't work that way. If you want an ever-changing API that tries to do that (and fails), becoming incompatible with itself at every major version update, then you should use Direct3D instead...

And I believe the PT limit is gone in OpenGL 2.0 (though I'm not 100% sure).


Edited by - Yann L on February 28, 2002 11:27:36 AM
