Blurring a Texture Atlas

Started by Wolfgang
6 comments, last by Zipster 17 years ago
Hey, I have a texture atlas consisting of four maps of the same size. I want to blur these maps, but I cannot just run a filter over the whole atlas, since that would bleed across the map borders. I also do not want to blur all four maps separately. I was thinking about using the border of each map as a border for the filter ... has anyone tried this? Any good ideas on how to do it? - Wolfgang
For a Gaussian blur in the pixel shader, you can build an appropriate matrix and modify the texcoords in the vertex shader to project each sub-texture into [0,1] space. That way you avoid border comparisons in the pixel shader and can use tex2Dproj together with clamp/border sampler-state combinations.
If you are ping-ponging via StretchRect, it may be faster to work with the textures (or texture fragments) separately (if you build the texture atlas in real time).

[Edited by - Krokhin on April 6, 2007 10:41:06 AM]
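Krokhin's remapping idea can be sketched on the CPU. This is a hypothetical Python illustration (the function names are invented, not from the thread): each 512 map of a 1024 atlas is projected into its own [0,1] "local" space, clamped there the way a CLAMP sampler state would, and mapped back to atlas space for the actual fetch.

```python
# Hypothetical sketch of per-map texcoord projection, not code from the thread.

def atlas_to_local(u, v, map_col, map_row, maps_per_side=2):
    """Project a normalized atlas UV into one sub-map's [0,1] local space."""
    scale = maps_per_side  # 2 maps per side -> each map spans 0.5 atlas UV
    return (u * scale - map_col, v * scale - map_row)

def clamp_local(u, v):
    """Emulate a CLAMP sampler state in the map's local [0,1] space."""
    clamp01 = lambda x: min(max(x, 0.0), 1.0)
    return (clamp01(u), clamp01(v))

def local_to_atlas(u, v, map_col, map_row, maps_per_side=2):
    """Map the (possibly clamped) local UV back to atlas space for the fetch."""
    return ((u + map_col) / maps_per_side, (v + map_row) / maps_per_side)
```

For example, a tap at atlas UV (0.51, 0.25) has left the upper-left map; clamping in local space pins it to that map's right edge (atlas u = 0.5) instead of letting it bleed into the neighbor.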
I ended up clamping the unnormalized texture coordinates. So if the atlas is a 1024 map, I have four 512 maps in there, and I clamp against four texture-coordinate ranges.
This does not look very elegant; maybe I can come up with a better idea next week.
If I understand your question correctly, you want the effect of an independent blur of each map, but by only running the physical atlas through one filter shader? The easiest way to go about this would probably be to just use a bunch of conditionals on the border checks and take advantage of their spatial coherence. Only fragment blocks that cross border boundaries will need to evaluate more than one branch.
Hey Zipster, thanks a lot for coming to my rescue again.
This is what I am doing now. I was wondering if there is a better way.
You can actually be quite clever with your camera setup and quad rendering to eliminate a bunch of the conditionals. By giving the camera the same width and height as the atlas texture, and rendering each map as a quad with the appropriate vertex positions and texture coordinates, you don't have to check what quadrant you're in and a lot of the texture coordinate calculations will be done already.

For example, let's say your atlas is 1024x1024. You set your camera width and height to [1024,1024]. You let the vertex positions of the quads be the unnormalized texture coordinates. Since the camera is the same size as the atlas, all four quads will fill the screen. However, you let the texture coordinates of the quads all be from [0,0]-[512,512]. Essentially what you're doing is letting the vertex positions act as "global" coordinates, and the texture coordinates act as "local" coordinates. As you render each quad, you read values from the atlas texture using the pixel position ("global" coordinates) as the lookup. The texture coordinates ("local" coordinates) tell you which pixels are at the border of the map. As far as convolution kernels are concerned, I'm pretty sure that anything past the edge of the data is considered 0 and you shouldn't be clamping the texture coordinates, since that just "extends" the edge pixels.
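To make that setup concrete, here is a hypothetical CPU-side sketch in Python (not code from the thread) of the quad arrangement Zipster describes: vertex positions carry "global" atlas texel coordinates, texture coordinates carry "local" per-map coordinates, and a kernel tap is valid only while its local coordinate stays inside the map.

```python
# Hypothetical illustration of the "global vs. local" coordinate split
# for a 1024x1024 atlas holding four 512x512 maps.

ATLAS = 1024
MAP = 512

def quad_vertices():
    """One quad per map: positions are global (atlas) texel rectangles,
    texture coordinates are local and always (0, 0)-(512, 512)."""
    quads = []
    for row in (0, 1):
        for col in (0, 1):
            pos = (col * MAP, row * MAP, (col + 1) * MAP, (row + 1) * MAP)
            tex = (0, 0, MAP, MAP)
            quads.append((pos, tex))
    return quads

def tap_in_bounds(local_x, local_y):
    """A kernel tap is valid only if its local coordinate stays in the map."""
    return 0 <= local_x < MAP and 0 <= local_y < MAP
```

All four quads together tile the 1024x1024 screen, while every quad sees the same local (0,0)-(512,512) range for border detection.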
Thanks for the advice. It is just a simple blur filter. One of my target platforms would not like me rendering four times with different camera settings, so I assume the following code is the best I can do. By the way, moving this work into the vertex shader did not work out; as soon as I do, the interpolation seems to screw up the values.

   float2 TexC =
       ((vPos.x < NormalizedTextureSize) && (vPos.y < NormalizedTextureSize)) ?
           clamp(vPos, float2(0.0f, 0.0f),
                       float2(NormalizedTextureSize, NormalizedTextureSize)) : // upper left
       ((vPos.x < NormalizedAtlasSize) && (vPos.y < NormalizedTextureSize)) ?
           clamp(vPos, float2(NormalizedTextureSize, 0.0f),
                       float2(NormalizedAtlasSize, NormalizedTextureSize)) :   // upper right
       ((vPos.x < NormalizedTextureSize) && (vPos.y < NormalizedAtlasSize)) ?
           clamp(vPos, float2(0.0f, NormalizedTextureSize),
                       float2(NormalizedTextureSize, NormalizedAtlasSize)) :   // lower left
       ((vPos.x < NormalizedAtlasSize) && (vPos.y < NormalizedAtlasSize)) ?
           clamp(vPos, float2(NormalizedTextureSize, NormalizedTextureSize),
                       float2(NormalizedAtlasSize, NormalizedAtlasSize)) :     // lower right
           float2(0.0f, 0.0f);

   float2 TexelSize = float2(1.0f / 512.0f, 1.0f / 512.0f);

   float4 Color;
   Color  = tex2D(Image, TexC + float2(-TexelSize.x, -TexelSize.y));
   Color += tex2D(Image, TexC + float2( 0.0f,        -TexelSize.y));
   Color += tex2D(Image, TexC + float2( TexelSize.x, -TexelSize.y));
   Color += tex2D(Image, TexC + float2(-TexelSize.x,  0.0f));
   Color += tex2D(Image, TexC + float2( 0.0f,         0.0f));
   Color += tex2D(Image, TexC + float2( TexelSize.x,  0.0f));
   Color += tex2D(Image, TexC + float2(-TexelSize.x,  TexelSize.y));
   Color += tex2D(Image, TexC + float2( 0.0f,         TexelSize.y));
   Color += tex2D(Image, TexC + float2( TexelSize.x,  TexelSize.y));
   Color /= 9.0f; // average the 3x3 taps
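As a sanity check on the quadrant logic, here is a hypothetical Python transliteration (not from the thread), assuming NormalizedTextureSize = 0.5 and NormalizedAtlasSize = 1.0. It picks the sub-map rectangle from the fragment's own position and clamps a kernel tap to that rectangle, so taps cannot bleed into a neighboring map.

```python
# Hypothetical CPU-side transliteration of the quadrant clamp, assuming
# NormalizedTextureSize = 0.5 and NormalizedAtlasSize = 1.0.
NTS, NAS = 0.5, 1.0

def clamp2(p, lo, hi):
    """Component-wise clamp of a 2D point to the box [lo, hi]."""
    return (min(max(p[0], lo[0]), hi[0]), min(max(p[1], lo[1]), hi[1]))

def map_bounds(vpos):
    """Pick the sub-map rectangle that the fragment's position falls in."""
    x, y = vpos
    if x < NTS and y < NTS:
        return (0.0, 0.0), (NTS, NTS)      # upper left
    if y < NTS:
        return (NTS, 0.0), (NAS, NTS)      # upper right
    if x < NTS:
        return (0.0, NTS), (NTS, NAS)      # lower left
    return (NTS, NTS), (NAS, NAS)          # lower right

def clamp_tap(center, offset):
    """Clamp a kernel tap to the map containing the fragment's center."""
    lo, hi = map_bounds(center)
    tap = (center[0] + offset[0], center[1] + offset[1])
    return clamp2(tap, lo, hi)
```

Note that, like the shader, clamping extends the edge texels rather than zeroing out-of-map taps.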
You don't need to change the camera for each map, just for each atlas size. So no matter how many atlases you have, if they're all the same size you use the same camera. I had the following shader code (Cg) in mind:
// Returns 1 if x is inside the unit box, 0 otherwise
float unitBBox2D(in float2 x)
{
   return step(0.0f, x.x) * step(x.x, 1.0f) *
          step(0.0f, x.y) * step(x.y, 1.0f);
}

// Main shader: position is the "global" (atlas texel) coordinate, coords the
// "local" (map texel) coordinate; offsets holds the nine kernel taps in
// texels, e.g. (-1,-1) .. (1,1).
float4 main(in float2 position : POSITION,
            in float2 coords   : TEXCOORD0,
            uniform float2 offsets[9],
            uniform sampler2D Image) : COLOR
{
   float4 Color = float4(0.0f, 0.0f, 0.0f, 0.0f);
   float2 GlobalTexelSize = float2(1.0f / 1024.0f, 1.0f / 1024.0f);
   float2 LocalTexelSize  = float2(1.0f / 512.0f,  1.0f / 512.0f);
   float ValidTexels = 0.0f;

   for (int i = 0; i < 9; ++i)
   {
      // Mask out taps that leave the current map
      float ValidFlag = unitBBox2D((coords + offsets[i]) * LocalTexelSize);
      Color += tex2D(Image, (position + offsets[i]) * GlobalTexelSize) * ValidFlag;
      ValidTexels += ValidFlag;
   }

   return Color / ValidTexels;
}

Hopefully that clears up my explanation from before. If you render each map as a quad, the vertex positions serve as the texture lookup locations. This frees up the actual texture coordinates to act as a way for the filter to detect when the kernel reads outside the map boundaries for each quad: for an out-of-bounds tap, unitBBox2D returns 0, which eliminates that sample (it relies on the step function in Cg - HLSL has an equivalent). The variable ValidTexels keeps track of the number of in-bounds texels, so that if you're reading the upper-left corner of a map, for instance, it sums the four valid taps and divides by 4.0f rather than by a constant 9.0f.
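The divide-by-ValidTexels behaviour is easy to verify on the CPU. Here is a hypothetical Python equivalent (not code from the thread) of the shader's masked 3x3 average, where unitBBox2D becomes a plain bounds check:

```python
# Hypothetical CPU-side version of the masked 3x3 box blur: taps outside
# the map are skipped, and the sum is divided by the number of valid taps.

def masked_box_blur(img, x, y, w, h):
    """Average the 3x3 neighborhood of (x, y), skipping out-of-map taps."""
    total, valid = 0.0, 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            tx, ty = x + dx, y + dy
            if 0 <= tx < w and 0 <= ty < h:  # unitBBox2D's job in the shader
                total += img[ty][tx]
                valid += 1
    return total / valid
```

At a map corner only 4 of the 9 taps are in bounds, so a constant image blurs to the same constant instead of darkening at the edges, which is what dividing by a fixed 9.0f would do.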

I know this sounds a bit confusing, so let me know if you have any questions.

This topic is closed to new replies.
