I just got texture tiling with a texture atlas working, they said it was impossible?

Started by soconne
3 comments, last by Yann L 17 years, 1 month ago
I've read several articles on texture atlases that say one of the drawbacks is that a given texture inside the atlas cannot be tiled. But I just got it working with shaders. I wasn't sure if anyone else had done the same thing, because I could not find any other topics posted here related to the subject, so I'm posting my code.

I created a 512x512 texture with 4 subtextures in it, each 256x256. In the vertex shader I take the incoming UV coordinate and multiply it by 0.5, which puts it in the upper left corner of the image. Then in the fragment shader you simply offset it to the correct part of the entire texture. One thing to keep in mind is that for the upper left image, the maximum x coordinate should be (1 / image width) less than 0.5, and so should the maximum y value. This prevents color bleeding caused by linear interpolation.

So here's the vertex shader:

varying vec2 uv_coords;

void main()
{
	// Scale the incoming UVs down into a single quadrant of the atlas;
	// the fragment shader wraps and offsets them from there.
	uv_coords = gl_MultiTexCoord0.st * 0.5;
	gl_Position = ftransform();
}

And here's the fragment shader for sampling the subtexture in the upper left of the image.

uniform sampler2D texture;
varying vec2 uv_coords;

void main()
{
	// Varyings are read-only in the fragment shader, so work on a copy.
	// Wrap within the quadrant, leaving a small margin so linear filtering
	// doesn't bleed into the neighboring subtexture, then offset into the
	// correct part of the atlas.
	vec2 uv = mod(uv_coords, 0.49903);
	uv.x += 0.5;
	uv.x = clamp(uv.x, 0.50093, 1.0);
	vec4 color = texture2D(texture, uv);
	gl_FragColor = color;
}

And here's the fragment shader if you wanted to sample the subtexture in the upper right.

uniform sampler2D texture;
varying vec2 uv_coords;

void main()
{
	// Same as above, but also clamp y so the sample stays within the top row.
	vec2 uv = mod(uv_coords, 0.49903);
	uv.x += 0.5;
	uv.x = clamp(uv.x, 0.50093, 1.0);
	uv.y = clamp(uv.y, 0.0, 0.49903);

	vec4 color = texture2D(texture, uv);
	gl_FragColor = color;
}
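The same idea can be generalized so one fragment shader handles any subtexture by passing the sub-rectangle in as uniforms. Here's a rough sketch (the names subOffset, subSize and halfTexel are made up for illustration, and it assumes the vertex shader pre-scales the UVs by subSize instead of a hard-coded 0.5):

// Sketch only: generalizes the hard-coded quadrant shaders above.
// The sub-rectangle of the atlas is supplied as uniforms.
uniform sampler2D texture;
uniform vec2 subOffset;   // lower-left corner of the subtexture, e.g. vec2(0.5, 0.0)
uniform vec2 subSize;     // size of the subtexture, e.g. vec2(0.5, 0.5)
uniform vec2 halfTexel;   // e.g. vec2(1.0 / 1024.0) for a 512x512 atlas
varying vec2 uv_coords;   // pre-scaled by subSize in the vertex shader

void main()
{
	// Wrap inside the subtexture, keep a small margin against bleeding,
	// then shift into the subtexture's place in the atlas.
	vec2 uv = subOffset + clamp(mod(uv_coords, subSize), halfTexel, subSize - halfTexel);
	gl_FragColor = texture2D(texture, uv);
}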

It's pretty much the same for the bottom 2 subtextures. I found that this only works with linear filtering and shows color bleeding when used with mipmaps. But I think it's ideal for packing, let's say, detail textures into a single large image. That way you could apply several detail textures to, say, a terrain all in one texture unit and leave the other units open for other texture bindings. Granted, I've only run this on a single tiled textured quad, so I don't know what the performance implications are. I just wanted to know if anyone else had experimented with something like this and what their results were.

In my case, I was rendering terrain in my engine with multiple passes: one pass for the base texture and lightmap, then several more passes for each group of 3 detail textures and their alpha maps. But now, in a single pass, I'm combining a base texture and lightmap into a single image, then up to 8 detail textures in a single image, and their associated alpha maps in the remaining 2 texture units (2 x RGBA = 8 alpha maps). And all this can be done in 1 pass with 4 texture units.
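To make the single-pass idea concrete, here is a rough sketch of what such a fragment shader could look like, cut down to 4 detail textures in a 2x2 atlas and one RGBA alpha map. All the names (baseAndLightmap, detailAtlas, alphaMaps, sampleDetail) are assumptions for illustration, not the actual engine code:

// Sketch only: single-pass terrain splatting with detail textures in an atlas.
uniform sampler2D baseAndLightmap; // base texture with the lightmap already combined in
uniform sampler2D detailAtlas;     // 2x2 atlas of tiling detail textures
uniform sampler2D alphaMaps;       // one blend weight per channel (R, G, B, A)
varying vec2 uv;                   // terrain-wide coordinates
varying vec2 detail_uv;            // pre-scaled tiling coordinates from the vertex shader

// Wrap detail_uv inside one 2x2 atlas cell and offset it into place.
vec4 sampleDetail(vec2 cellOffset)
{
	vec2 halfTexel = vec2(1.0 / 1024.0);
	vec2 local = clamp(mod(detail_uv, 0.5), halfTexel, vec2(0.5) - halfTexel);
	return texture2D(detailAtlas, cellOffset + local);
}

void main()
{
	vec4 base    = texture2D(baseAndLightmap, uv);
	vec4 weights = texture2D(alphaMaps, uv);

	vec4 detail = sampleDetail(vec2(0.0, 0.0)) * weights.r
	            + sampleDetail(vec2(0.5, 0.0)) * weights.g
	            + sampleDetail(vec2(0.0, 0.5)) * weights.b
	            + sampleDetail(vec2(0.5, 0.5)) * weights.a;

	gl_FragColor = base * detail;
}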
Author of Freeworld3D (http://www.freeworld3d.org)
There's an article in ShaderX3 about how to emulate wrapping, clamping and mirroring with pixel shaders in a texture atlas. It also shows ways to include mipmaps. You might want to check it out.

I personally tried pixel shader based emulation of wrap modes in a texture atlas some time ago. I generally found the overhead of the additional per-pixel math to be higher than the overhead from the texture state changes. But then again, my pixel shaders were already very large and I was fragment limited, so I ended up using atlases only for textures where UVs were guaranteed to stay in range. However, there may very well be scenarios where this technique is interesting performance-wise, especially when you're state change limited and have otherwise short pixel shaders. YMMV.

Quote:
But I think it's ideal for packing, let's say, detail textures into a single large image. That way you could apply several detail textures to, say, a terrain all in one texture unit and leave the other units open for other texture bindings.

For this specific application, 3D textures would be far better suited. You pack one detail texture per 3D slice, and enable linear sampling only on s and t. In your shader, you can then access as many slices as you need using only a single texture unit.
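A minimal sketch of that, assuming the detail textures are simply stacked along the r axis (detailSlices, numSlices and sampleSlice are illustrative names):

// Sketch only: one detail texture per slice of a 3D texture.
uniform sampler3D detailSlices;
uniform float numSlices;      // e.g. 8.0
varying vec2 detail_uv;       // tiling coordinates, GL_REPEAT on s and t

vec4 sampleSlice(float slice)
{
	// Sample at the centre of the slice so the lookup never filters
	// across neighbouring slices along r.
	float r = (slice + 0.5) / numSlices;
	return texture3D(detailSlices, vec3(detail_uv, r));
}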
I'll ditto that too. I did this at least a year ago, if not more: I have 16 textures packed into a single texture. As Yann L said, performance varies. But with texture arrays, atlases aren't really needed anymore IMO. One problem with atlases is that with any filter other than GL_NEAREST you will get bleeding issues unless you want to doctor up your textures. I don't, so texture arrays are just what I need. I'm also fragment shader limited, so the extra overhead of calculating the offsets kills my fps.
But how do you get around the mipmap issue with 3D textures? If you declare a 4x256x256 3D texture, it gets turned into 2x128x128 for the first mipmap, and so forth.
Author of Freeworld3D (http://www.freeworld3d.org)
Quote: Original post by soconne
But how do you get around the mipmap issue with 3D textures? If you declare a 4x256x256 3D texture, it gets turned into 2x128x128 for the first mipmap, and so forth.

You can't use mipmaps in 3D texture atlases. But in the case of detail textures, this doesn't really matter anyway. You'd need at most one or two levels of mipmapping, and the second level can be prefiltered and encoded in a second slice.
slice   texture
0       grass detail level 0
1       grass detail level 1
2       stone detail level 0
3       stone detail level 1
etc...

If you enable linear filtering on the r axis too, and carefully generate the r coordinate in your shader, you can use hardware filtering to perform the blending between the levels of each texture (which saves fragment instructions).
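A rough sketch of that trick, assuming the two prefiltered levels of each detail texture sit in adjacent slices and that some blend factor (here a made-up varying lodBlend) drives the transition:

// Sketch only: let the hardware's linear filter along r blend two prefiltered levels.
uniform sampler3D detailSlices;
uniform float numSlices;      // total slice count, e.g. 4.0 for the layout above
varying vec2 detail_uv;
varying float lodBlend;       // 0.0 = level 0 only, 1.0 = level 1 only

vec4 sampleDetailFiltered(float firstSlice)
{
	// Level 0 and level 1 sit in adjacent slices; placing r between the two
	// slice centres makes linear filtering along r do the blend for free.
	float r = (firstSlice + 0.5 + clamp(lodBlend, 0.0, 1.0)) / numSlices;
	return texture3D(detailSlices, vec3(detail_uv, r));
}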

Of course, as Mars_999 mentioned, if your GPU supports it, check out EXT_texture_array. It essentially works like a 3D texture (you select the slice in a similar manner), but it mipmaps each slice independently. All the textures in the array still have to have the same dimensions though, which I consider a serious design flaw (although I fully understand the technical reasons behind it from a hardware implementation point of view: they simply reused the 3D texture circuitry with a modified mipmap lookup operator).
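For comparison, a minimal sketch of the texture array route, assuming EXT_texture_array support in GLSL (detailArray and sampleLayer are illustrative names):

#extension GL_EXT_texture_array : require

// Sketch only: one detail texture per layer, each mipmapped independently,
// with ordinary GL_REPEAT tiling on s and t and no atlas offset math.
uniform sampler2DArray detailArray;
varying vec2 detail_uv;

vec4 sampleLayer(float layer)
{
	// The third coordinate selects the layer.
	return texture2DArray(detailArray, vec3(detail_uv, layer));
}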

