Quote:Original post by dario_ramos
I thought you meant I had to set those modes using SetSamplerState when initializing the device. I did that and nothing changed.
I did mean that, but it seems that particular issue I had around q=0 is no problem in your scenario and we're just stuck with the tricky one around q=0.5.
Quote:However, I looked around some more about these continuity issues and found that, aside from tex2D, there are tex2Dlod and tex2Dgrad functions. When using those, it seems you can change sampler states inside the pixel shader. Is that true? I read tex2Dlod and tex2Dgrad's documentation but didn't understand how to apply them to my problem... Is that possible?
As far as I know, tex2Dlod and tex2Dgrad are a lot less magical than that. If I understand them correctly, they're meant as replacements for tex2D when you need to call it from within some kind of flow control or branching in your pixel shader.
Normally the sampler can judge which mip level you'd want from the screen-space derivatives of the texture coordinates you pass to tex2D. With tex2D instructions inside flow control however (for example if statements), those derivatives can't be depended on, since they may differ wildly from pixel to pixel, so you use these functions to help the sampler out in selecting a mip level.
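To illustrate the idea outside of HLSL, here's a rough Python sketch (a simplified model, not how the hardware actually computes it): the selected mip level is roughly the log2 of how many texels one screen pixel covers, estimated from those derivatives:

```python
import math

def mip_level(ddx, ddy, tex_size):
    # Texel footprint of one screen pixel along the worse axis;
    # ddx/ddy are the per-pixel changes of the normalized texcoord.
    footprint = max(abs(ddx), abs(ddy)) * tex_size
    # log2 of the footprint picks the mip; clamp at the full-res level.
    return max(0.0, math.log2(max(footprint, 1e-9)))

# One texel per screen pixel -> mip 0 (full resolution)
print(mip_level(1.0 / 256.0, 0.0, 256))   # 0.0
# Two texels per screen pixel -> mip 1 (half resolution)
print(mip_level(2.0 / 256.0, 0.0, 256))   # 1.0
```

Inside a branch the real ddx/ddy values are undefined, which is why tex2Dlod lets you pass a level like this directly instead.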
We shouldn't need tex2D instructions within flow control though, and you probably shouldn't be using any mip levels anyway if you want to use your image at maximum resolution, so I don't think these are of any use here. I could be way off though, so please correct me if I'm wrong.
Quote:But I'm still getting those nasty artifacts...
Yep, same here with my little test project. I tried a number of things, but the problem remains that the sampler simply returns wrong interpolations when using anything but point filtering. My crappy continuity checking introduced more artifacts than it fixed, but I tried implementing some simple manual linear filtering, which does help a bit:
Naive point-sampled linear filter:

float Sample(float2 tex)
{
    // Sampler is set up with point filtering!
    float4 color = tex2D(Sampler, tex);
    float q = color.r;
    return q < 0.5 ? q + 0.5 : q - 0.5;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Change 256 to the size of your texture,
    // it only supports square textures atm
    float2 texelpos = 256 * input.Tex0;
    float2 lerps = frac(texelpos);
    float texelSize = 1.0 / 256.0;

    float4 sourcevals[4];
    sourcevals[0] = Sample(input.Tex0);
    sourcevals[1] = Sample(input.Tex0 + float2(texelSize, 0));
    sourcevals[2] = Sample(input.Tex0 + float2(0, texelSize));
    sourcevals[3] = Sample(input.Tex0 + float2(texelSize, texelSize));

    float4 lerpval = lerp(lerp(sourcevals[0], sourcevals[1], lerps.x),
                          lerp(sourcevals[2], sourcevals[3], lerps.x),
                          lerps.y);

    return float4(lerpval.r, 0, 0, 1);
}
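To make the failure mode concrete, here's a tiny Python model (not shader code) of one axis of that filter, assuming the same q < 0.5 ? q + 0.5 : q - 0.5 unwrapping as the Sample function above. Lerping the raw stored values first and unwrapping afterwards (which is effectively what hardware linear filtering plus the shader correction does) breaks down for texels that straddle the 0.5 boundary, while unwrapping each point sample first does not:

```python
def unwrap(q):
    # Same correction as Sample() in the shader above.
    return q + 0.5 if q < 0.5 else q - 0.5

def lerp(a, b, t):
    return a + (b - a) * t

def filter_then_unwrap(q0, q1, t):
    # Hardware linear filtering blends the raw stored values
    # before the shader gets a chance to fix them.
    return unwrap(lerp(q0, q1, t))

def unwrap_then_filter(q0, q1, t):
    # The point-filtered shader above: fix each texel first, then blend.
    return lerp(unwrap(q0), unwrap(q1), t)

# Two neighbouring texels whose stored values straddle the 0.5 wrap;
# their true (unsigned) values are 0.46875 and 0.53125.
q0, q1 = 0.96875, 0.03125
print(filter_then_unwrap(q0, q1, 0.5))   # 0.0 -> the artifact
print(unwrap_then_filter(q0, q1, 0.5))   # 0.5 -> the expected value
```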
It's still a far cry from the default hardware linear filtering though. So either I messed something up, or the hardware filtering is a lot more complex and/or adaptive than my simple naive filter.
With this said and done, I still think your best bet is the two-pass approach described above, where you convert from signed to unsigned in the first pass and let the shader do hardware linear filtering on the correct unsigned values in the second. Multiframe image playback need not be a showstopper use case, since even with this two-pass approach any non-ancient graphics hardware should be able to render this at hundreds of frames per second.
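A rough Python model of that two-pass idea (illustrative only; in practice pass 1 would be a render-to-texture and pass 2 would be the hardware sampler): once every texel has been converted to unsigned up front, plain linear interpolation between neighbours gives sensible values even across the old wrap point:

```python
def unwrap(q):
    # Pass 1: signed-to-unsigned conversion, done once per texel.
    return q + 0.5 if q < 0.5 else q - 0.5

def lerp(a, b, t):
    return a + (b - a) * t

# Raw texel row crossing the wrap point (stored, wrapped values).
stored = [0.90625, 0.96875, 0.03125, 0.09375]

# Pass 1: render the converted values into an intermediate texture.
converted = [unwrap(q) for q in stored]   # [0.40625, 0.46875, 0.53125, 0.59375]

# Pass 2: hardware linear filtering now blends already-correct values.
print(lerp(converted[1], converted[2], 0.5))   # 0.5, smooth across the seam
```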