[HLSL] Texture tiling using frac is causing incorrect uv coordinates at the edges

Started by
12 comments, last by froop 12 years, 11 months ago
When I draw a quadrangle to the screen, the uv coordinates wrap incorrectly and the texture "leaks" through to the opposite side, as shown in the picture below:

[Image: TextureAtlasError.png]


The culprit is the following line in my HLSL shader:



input.TextureCoordinates0.xy = frac(input.TextureCoordinates0.xy * input.NumTiles.xy);
input.TextureCoordinates1.xy = frac(input.TextureCoordinates1.xy * input.NumTiles.zw);


input.NumTiles is just (1, 1, 1, 1) for this quadrangle (no tiling)
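For reference, frac maps a coordinate of exactly 1.0 back to 0.0, so a vertex that lands precisely on the tile edge samples the opposite side of the tile:

```hlsl
// Illustration of the wrap: frac() maps 1.0 to 0.0, so a fragment that
// lands exactly on the tile edge samples the opposite side.
// frac(0.999f) -> 0.999f   (still the right-hand edge of the tile)
// frac(1.0f)   -> 0.0f     (wraps back to the left-hand edge)
float2 wrapped = frac(float2(1.0f, 1.0f) * input.NumTiles.xy); // == (0, 0)
```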

The texture is drawn with the following sampler states:



sampler TextureSampler = sampler_state
{
Texture = <AtlasTexture>;

MinFilter = Point;
MagFilter = Point;
MipFilter = Point;

AddressU = Clamp;
AddressV = Clamp;
};


The texture comes from a texture atlas whose sprites have a gutter region of 2 pixels around them, but this shouldn't matter much given that I am avoiding linear filtering.

How can I prevent the texture coordinates from incorrectly wrapping when using frac?
I also have a similar problem. I would love to hear a fix for this! :)
Has anyone got any advice for this problem? I have come up with nothing after a week of searching for a solution to the frac problem.
I've tried using a pixel offset that scales with the mipmap level:



// [MipMap Level]
// • Use original texture coordinates for calculation
// • Requires minimum of vs_3_0
// • mipMapLevel = log2(max(ddx(textureCoord.x) * textureWidth, ddy(textureCoord.y) * textureHeight));
float mipMapLevel0 = log2(max(ddx(input.OriginalCoordinates.x) * AtlasDim.x, ddy(input.OriginalCoordinates.y) * AtlasDim.y));
//float mipMapLevel0 = log2(max(ddx(input.OriginalCoordinates.x) * input.SourceRectangle0.z, ddy(input.OriginalCoordinates.y) * input.SourceRectangle0.w));

// Use a pixel offset with powers of 2 as each mipMap level increases by a power of 2
float onePixel = HalfPixel + HalfPixel;
input.TextureCoordinates0.xy += onePixel * pow(2, mipMapLevel0);
input.TextureCoordinates1.xy += onePixel * pow(2, mipMapLevel0);


I've also tried turning off MIP filtering:



sampler TextureSampler = sampler_state
{
Texture = <AtlasTexture>;

MinFilter = Point;
MagFilter = Point;
MipFilter = None;

AddressU = Clamp;
AddressV = Clamp;
};


Neither of these works. Any further advice would be most welcome.
try this


input.TextureCoordinates0.xy = frac(input.TextureCoordinates0.xy * input.NumTiles.xy);
input.TextureCoordinates1.xy = frac(input.TextureCoordinates1.xy * input.NumTiles.zw);

input.TextureCoordinates0.xy *= (atlasSizeInPixel - 1) / atlasSizeInPixel;
input.TextureCoordinates1.xy *= (atlasSizeInPixel - 1) / atlasSizeInPixel;
input.TextureCoordinates0.xy += 0.5f / atlasSizeInPixel;
input.TextureCoordinates1.xy += 0.5f / atlasSizeInPixel;


no guarantees

I've tried that, along with:



input.TextureCoordinates0.x *= (AtlasDim.x - 1) / AtlasDim.x;
input.TextureCoordinates0.y *= (AtlasDim.y - 1) / AtlasDim.y;
input.TextureCoordinates1.x *= (AtlasDim.x - 1) / AtlasDim.x;
input.TextureCoordinates1.y *= (AtlasDim.y - 1) / AtlasDim.y;
input.TextureCoordinates0.x += 0.5f / AtlasDim.x;
input.TextureCoordinates0.y += 0.5f / AtlasDim.y;
input.TextureCoordinates1.x += 0.5f / AtlasDim.x;
input.TextureCoordinates1.y += 0.5f / AtlasDim.y;


but it still doesn't remove the bleeding.

I don't understand how or why this could be happening when I have a big pixel gutter around each sprite and I'm not using linear filtering or mip filtering. I thought it could be due to floating-point errors, but I can't debug the pixel shader in PIX as it won't give me that option after debugging a pixel.
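One avenue worth ruling out: even with point filtering, the hardware still computes screen-space derivatives of the final uv, and frac introduces a discontinuity at the tile seam that can distort the derivative-based mip selection. On ps_3_0 the gradients can be supplied explicitly with tex2Dgrad, taken from the original un-wrapped coordinates (a sketch only, reusing the OriginalCoordinates input from the shader above):

```hlsl
// Sketch (ps_3_0): sample with explicit gradients taken from the
// original, un-wrapped coordinates so the frac() discontinuity at the
// tile seam cannot distort derivative-based mip selection.
float2 dx = ddx(input.OriginalCoordinates.xy);
float2 dy = ddy(input.OriginalCoordinates.xy);
float4 colour0 = tex2Dgrad(TextureSampler, input.TextureCoordinates0.xy, dx, dy);
```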



float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
// [Scroll Texture]
float4 offset = input.ScrollVector * Time;
input.TextureCoordinates0.x -= offset.x;
input.TextureCoordinates1.x -= offset.z;
input.TextureCoordinates0.y += offset.y;
input.TextureCoordinates1.y += offset.w;

// Wrap coordinates in [0, 1) range (for both tiling and scrolling)
// Use 'frac(texCoord) * n' to wrap in [0, n) range
input.TextureCoordinates0.xy = frac(input.TextureCoordinates0.xy);
input.TextureCoordinates1.xy = frac(input.TextureCoordinates1.xy);

// [MipMap Level]
// • Use original texture coordinates for calculation
// • Requires minimum of vs_3_0
// • mipMapLevel = log2(max(ddx(textureCoord.x) * textureWidth, ddy(textureCoord.y) * textureHeight));
//float mipMapLevel0 = log2(max(ddx(input.OriginalCoordinates.x) * AtlasDim.x, ddy(input.OriginalCoordinates.y) * AtlasDim.y));
//float mipMapLevel0 = log2(max(ddx(input.OriginalCoordinates.x) * input.SourceRectangle0.z, ddy(input.OriginalCoordinates.y) * input.SourceRectangle0.w));

// Use a pixel offset with powers of 2 as each mipMap level increases by a power of 2
//float onePixel = HalfPixel + HalfPixel;
//input.TextureCoordinates0.xy += onePixel * pow(2, mipMapLevel0);
//input.TextureCoordinates1.xy += onePixel * pow(2, mipMapLevel0);

// Adjust uv coordinates so they use the correct texture from the texture atlas
input.TextureCoordinates0.xy = CalculateAtlasUV(input.TextureCoordinates0.xy, input.SourceRectangle0);
input.TextureCoordinates1.xy = CalculateAtlasUV(input.TextureCoordinates1.xy, input.SourceRectangle1);

float4 colour0 = tex2D(TextureSampler, input.TextureCoordinates0);
float4 colour1 = tex2D(TextureSampler, input.TextureCoordinates1);

// [Linear Interpolation]
// • Based on colour1 alpha (therefore colour1 takes precedence over colour0)
// • output = lerp(A, B, C);
// • output = A * (1 - C) + B * C;
float4 colour = lerp(colour0, colour1, colour1.a);
colour.a *= input.TextureCoordinates0.w;

return colour;
}
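CalculateAtlasUV isn't shown in the thread; assuming SourceRectangle packs (u, v, width, height) of the sprite in normalized atlas space, it is presumably something like:

```hlsl
// Hypothetical sketch of CalculateAtlasUV (not posted in the thread),
// assuming sourceRect is (u, v, width, height) of the sprite in
// normalized [0, 1] atlas coordinates.
float2 CalculateAtlasUV(float2 texCoord, float4 sourceRect)
{
    // Map the wrapped [0, 1) coordinate into the sprite's sub-rectangle.
    return sourceRect.xy + texCoord * sourceRect.zw;
}
```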
Hooray, I've solved the problem using texture coordinates that are clamped to the texel range of the texture. Toaster should be pleased :)


// Corner uv coordinates inset by half a texel so sampling never reaches
// the texture edge (width/height are the texture's dimensions in pixels)
private static readonly Vector2[] textureCoordinates = new Vector2[]
{
new Vector2(0.5f / width, 1f - (0.5f / height)),
new Vector2(0.5f / width, 0.5f / height),
new Vector2(1f - (0.5f / width), 1f - (0.5f / height)),
new Vector2(1f - (0.5f / width), 0.5f / height),
};


Is there any way of applying this in the HLSL shader code rather than recalculating the texture coordinates for each vertex in the C# code?

Applying the code I posted (without the frac part) in the vertex shader should work.
You've misunderstood my post, the problem has been solved. The frac method is required to tile the texture.

I'm currently using a scaler in the HLSL code to scale between [0.5f / dim, 1 - 0.5f / dim] but if there's a more efficient way then I would like to know.
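For reference, that scaler can be folded into one multiply-add per component; a sketch, assuming AtlasDim holds the atlas size in pixels and uv is the wrapped [0, 1) coordinate:

```hlsl
// Sketch: remap the wrapped [0, 1) coordinate into the half-texel
// inset range [0.5/dim, 1 - 0.5/dim] with a single multiply-add.
float2 halfTexel = 0.5f / AtlasDim.xy;
uv = uv * (1.0f - 2.0f * halfTexel) + halfTexel;
```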

No, I understood :) It should solve the new "problem" nonetheless if you go through the math. If not, I'll officially retire (from this thread :))

This topic is closed to new replies.
