When texture sizes aren't powers of 2

Here's a question that's probably been asked a million times, but it's got me completely baffled at the moment: how do you deal with texture sizes (height/width) that are not powers of two in DirectX?

I'm trying to load textures from Half-Life WAD3 files with sizes like 160x128 and 48x48 onto DX8 texture surfaces that must be powers of two in size, e.g. 256x256 or 256x128. I thought I could tile them and/or chop my polygons to a power of two and adjust their texture coordinates so that all would be well, but this seems extremely complicated and I'm not even sure it would work for every address wrapping situation.

So this has led me to wonder: do engines like Half-Life resize these textures to powers of two when loading them in-game, or are they using clever texture addressing tricks like the ones I mentioned above? If they're resizing at load time, wouldn't this slow things down too much? And would resizing a 160x128 texture to, say, a 256x128 or 256x256 surface noticeably change the original look of the image when texture coordinates are mapped onto it during rendering? Argh, and what kind of resizing method would they be using? Bilinear? Bicubic? Nearest point?

If anybody has pondered this question before or knows a solution, I'd really like to hear your insight before I head down the wrong road.

Thanks,
PigVomit
If I'm not mistaken, the image is loaded into a texture with dimensions of the nearest power of two. If you stretch the texture, the slowdown will be insignificant because you'll stretch it at loading time, which isn't nearly so time-sensitive. You can use a function such as the OpenGL Utility Library's gluScaleImage to scale the image before creating a texture out of it, so that it takes up the full power-of-two texture; usually such functions use the highest quality resampling available. The problem with this is that since the aspect ratio of the image changes when mapping it to a rectangle of a different aspect ratio, the texture coordinates must be changed to correct this, otherwise there will be noticeable distortion. This may be acceptable, but probably isn't. So your best bet is the first option. =)
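As a rough illustration of the rounding-up step, here is a minimal sketch of a helper that finds the nearest power of two at or above a given dimension. NextPow2 is just a made-up name, not part of DirectX or GLU:

unsigned int NextPow2(unsigned int x)
{
    unsigned int p = 1;
    while (p < x)
        p <<= 1;        // e.g. 160 -> 256, 128 -> 128, 48 -> 64
    return p;
}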
Thanks!

Ok, so if I understand correctly, a function like gluScaleImage would try to maintain the aspect ratio of the original texture, and this would invalidate the texel coordinates of the polygons? That makes sense. So is the better option to resize the width/height to the nearest power of two by writing my own resizing routine (or finding one already written) that does not attempt to maintain the aspect ratio? Done that way, it seems the texture coordinates would map to the same places they do in the original image, although the texel colors might be slightly different due to the interpolation method used when it's resized. That was my underlying concern. But I'm sure it has a negligible effect on the way the texture was designed to look and probably doesn't appear any different when rendered.
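If you did want to roll your own resampler rather than lean on a library, a nearest-point version that ignores aspect ratio is only a few lines. This is a sketch for tightly packed 32-bit RGBA pixels; ResizeNearest is a hypothetical helper, not anything from D3DX or GLU:

void ResizeNearest(const unsigned int* src, int srcW, int srcH,
                   unsigned int* dst, int dstW, int dstH)
{
    for (int y = 0; y < dstH; ++y)
    {
        int sy = y * srcH / dstH;           // nearest source row
        for (int x = 0; x < dstW; ++x)
        {
            int sx = x * srcW / dstW;       // nearest source column
            dst[y * dstW + x] = src[sy * srcW + sx];
        }
    }
}

Nearest-point is the crudest filter; bilinear would look smoother, but as noted above the difference is usually negligible once the texture is mapped back onto the polygon.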
No, gluScaleImage will just stretch the image for you. Let's say your image is 12x14 and you've loaded it into a block of memory. You set up another block of memory to hold a 16x16 image (the closest power-of-two rectangle) and have gluScaleImage stretch the 12x14 image to 16x16 (the aspect ratio is not preserved), putting the result in the block of memory you set up for the 16x16 image. Then create the actual texture from that 16x16 buffer. But you have to adjust the texture coordinates manually.
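Roughly, that workflow might look like this in OpenGL terms. This is only a sketch: UploadStretched is a hypothetical name, and the WAD image is assumed to have already been expanded from its palette to packed RGBA bytes. Under Direct3D 8 the equivalent would be to fill a system-memory buffer the same way and let D3DX filter it onto the texture surface, e.g. with D3DXLoadSurfaceFromMemory.

#include <GL/glu.h>
#include <vector>

void UploadStretched(const unsigned char* srcPixels, int srcW, int srcH)
{
    // Round each dimension up to the next power of two.
    int dstW = 1; while (dstW < srcW) dstW <<= 1;
    int dstH = 1; while (dstH < srcH) dstH <<= 1;

    std::vector<unsigned char> dstPixels(dstW * dstH * 4);

    // Stretch (no aspect-ratio preservation) into the power-of-two buffer.
    gluScaleImage(GL_RGBA, srcW, srcH, GL_UNSIGNED_BYTE, srcPixels,
                  dstW, dstH, GL_UNSIGNED_BYTE, &dstPixels[0]);

    // Create the actual texture from the stretched buffer.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, dstW, dstH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, &dstPixels[0]);
}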
You probably won't need to change the texture coordinates. Usually when you have an odd-shaped texture, it's because you're drawing an odd-shaped polygon, so if you have a 75x128 texture, say, more than likely you're mapping it onto a polygon with a similar shape. Even though you stretch the texture to 128x128, when you map it onto the polygon it'll be stretched back into its original aspect ratio. You might notice the artifacts if you look really carefully, but usually you won't. In my game, every time the player starts a new level, I take a screenshot of the current one running and scale it into a 512x512 texture (so at 640x480 the width is shrunk but the height is stretched), and you can really only notice the scaling where there are sharp lines.


War Worlds - A 3D Real-Time Strategy game in development.
Yes, that's what I was thinking too, because the texture coordinates are still going to be between 0.0 and 1.0 regardless of the height/width of the texture. When they were calculated for a polygon, the texture was its normal size, say 160x128, so the upper right and lower left corners of a polygon face map to the upper right and lower left corners of the texture. Stretching the texture doesn't change anything, because the pixel-to-texel mapping is still interpolated between 0.0 and 1.0. In other words, the texture coordinates effectively scale to match the stretched dimensions of the texture because of how texel values are interpolated between two coordinates. However, trying to maintain the aspect ratio would probably mess things up. I'm going to have to try it and see what happens. I'm still curious to know what the kids at Valve are doing, whether they're scaling at load time, and what sort of resizing algorithm they use. Thanks!
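To make the coordinate argument concrete, here is a small sketch (hypothetical vertex layout, not tied to any particular API) of a 160x128 face textured with the stretched 256x128 version. The UVs still run 0..1, so the stretch applied at load time is undone when the texture is mapped back onto the face's original proportions:

struct Vertex { float x, y, z, u, v; };

Vertex quad[4] = {
    //   x       y      z     u     v
    {   0.0f,   0.0f,  0.0f, 0.0f, 1.0f },
    {   0.0f, 128.0f,  0.0f, 0.0f, 0.0f },
    { 160.0f, 128.0f,  0.0f, 1.0f, 0.0f },
    { 160.0f,   0.0f,  0.0f, 1.0f, 1.0f },
};

// If the image had instead been padded (not stretched) into one corner of a
// 256x128 surface, the U coordinates would need to be scaled by 160/256.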

