Signed image on a texture

22 comments, last by dario_ramos 14 years, 1 month ago
Quote:Original post by remigius

Alternatively you could use q = (q + 0.5) % 1; which forgoes the conditional and produces the same result. Both methods seem to have some continuity artifacts at q = 0 and q = 0.5 when using linear filtering, but switching to point filtering works out well.

I hope this is somehow remotely relevant to your problem [smile]


Yes it is! In my shader, I use the conditional formula and it looks almost right, save for some "continuity artifacts". I tried using point filtering and they were gone, but the resulting quality is too low, especially when I zoom in. Check it out:

With linear filtering:
[screenshot: linear filtering]

With point filtering:
[screenshot: point filtering]

I asked the higher-ups about using point filtering, but they said the quality was "unacceptable". I tried other filters (anisotropic, Gaussian, pyramidal), but they seem to have the same continuity problems.

One idea a partner suggested was mapping pixels "near" 0 and 1 to 0.5 (using an "epsilon" value to define that range). Another possibility is doing the filtering ourselves (though I find it hard to best Direct3D...). What do you guys think?
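For reference, the remap in my shader boils down to something like this sketch (Texture0 and input match the names in my actual shader, which I post further down; the rest is illustrative):

// Sketch of the signed-to-unsigned remap in the pixel shader
float4 color = tex2D( Texture0, input );
float q = color.r;
q = ( q < 0.5 ) ? q + 0.5 : q - 0.5;  // conditional form
// equivalent for q in [0, 1): q = (q + 0.5) % 1;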

For the issue at q = 0, setting your sampler's addressing modes (AddressU, AddressV) to clamp does the trick. Values 'near' 0 (q = 0.5) are trickier, since you can't really tell with some epsilon when they happen. These continuity problems occur in the sampler, so by the time the sample reaches your shader it has already been interpolated to the wrong value. You could check for discontinuities by sampling neighboring pixels, but then you might as well jump straight to implementing linear filtering yourself.
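In unmanaged D3D that amounts to two sampler states on whichever stage your texture is bound to (stage 0 is an assumption in this sketch):

// Clamp addressing on sampler stage 0 (stage index assumed)
device->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_CLAMP);
device->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_CLAMP);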

Alternatively, you could render the signed 16-bit image to an unsigned render target of the exact same size with point filtering. Next you render this result to your target surface with linear filtering, giving you hardware interpolation over the correct values. It's a detour, but it's correct and should still be quite fast. Whether it's feasible for your use case with a larger number of quads, however, I can't say.
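To sketch the idea in unmanaged D3D9 (the target format, shader and helper names here are assumptions, not tested code from my project):

// Create an unsigned render target of the exact same size as the image
IDirect3DTexture9* unsignedTex = NULL;
device->CreateTexture(texWidth, texHeight, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &unsignedTex, NULL);

IDirect3DSurface9 *rtSurface = NULL, *backBuffer = NULL;
unsignedTex->GetSurfaceLevel(0, &rtSurface);
device->GetRenderTarget(0, &backBuffer);

// Pass 1: remap signed -> unsigned with point filtering, so the
// shader only ever sees exact texel values
device->SetRenderTarget(0, rtSurface);
device->SetTexture(0, signedTexture);
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
device->SetPixelShader(remapShader);  // applies the q +/- 0.5 remap
DrawFullScreenQuad(device);           // assumed helper

// Pass 2: draw the remapped result with hardware linear filtering
device->SetRenderTarget(0, backBuffer);
device->SetTexture(0, unsignedTex);
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
device->SetPixelShader(NULL);         // plain textured draw
DrawImageQuad(device);                // assumed helper

rtSurface->Release();
backBuffer->Release();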
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
Quote:Original post by remigius

For the issue at q = 0, setting your sampler's addressing modes (AddressU, AddressV) to clamp does the trick. Values 'near' 0 (q = 0.5) are trickier, since you can't really tell with some epsilon when they happen. These continuity problems occur in the sampler, so by the time the sample reaches your shader it has already been interpolated to the wrong value. You could check for discontinuities by sampling neighboring pixels, but then you might as well jump straight to implementing linear filtering yourself.


I thought you meant I had to set those modes using SetSamplerState when initializing the device. I did that and nothing changed. However, I looked around some more about these continuity issues and found that, aside from tex2D, there are tex2Dlod and tex2Dgrad functions. When using those, it seems you can change sampler states inside the pixel shader. Is that true? I read tex2Dlod and tex2Dgrad's documentation but didn't understand how to apply them to my problem... Is that possible?

Quote:Original post by remigius
Alternatively, you could render the signed 16-bit image to an unsigned render target of the exact same size with point filtering. Next you render this result to your target surface with linear filtering, giving you hardware interpolation over the correct values. It's a detour, but it's correct and should still be quite fast. Whether it's feasible for your use case with a larger number of quads, however, I can't say.


I asked my partners and they said this approach would be OK if all our images were single frame, but we also do multiframe image playback... So I'm more interested in trying a solution using tex2Dlod or tex2Dgrad, but I'll need some guidance :P
I just tried this instead of tex2D:

float4 color = tex2Dgrad( Texture0, input, ddx(input), ddy(input) );


But I'm still getting those nasty artifacts...

Edit: This doesn't work either:

float4 color = tex2Dlod( Texture0, float4(input, 0.0, 0.0) );  // the w component selects the mip level; 0 = full resolution

Quote:Original post by dario_ramos
I thought you meant I had to set those modes using SetSamplerState when initializing the device. I did that and nothing changed.


I did mean that, but it seems the particular issue I had around q=0 isn't a problem in your scenario, so we're just stuck with the tricky one around q=0.5.

Quote:However, I looked around some more about these continuity issues and found that, aside from tex2D, there are tex2Dlod and tex2Dgrad functions. When using those, it seems you can change sampler states inside the pixel shader. Is that true? I read tex2Dlod and tex2Dgrad's documentation but didn't understand how to apply them to my problem... Is that possible?


As far as I know, tex2Dlod and tex2Dgrad are a lot less magical than that. If I understand them correctly, they're meant as replacements for tex2D when you need to sample from within some kind of flow control or branching in your pixel shader.

Normally the sampler judges which mip level you want by the screen-space derivatives of the texture coordinates you pass to tex2D. Within flow control (for example if statements), however, those derivatives can't be relied on, since they may differ wildly from pixel to pixel, so you use these functions to help the sampler out in selecting a mip level.
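For illustration (the names here are made up), the pattern is something like:

// Gradients are computed outside the branch, where they're well-defined,
// then passed in explicitly where plain tex2D couldn't be trusted
float2 dx = ddx(uv);
float2 dy = ddy(uv);
float4 c;
if (someCondition)
    c = tex2Dgrad(Sampler, uv, dx, dy);
else
    c = tex2Dlod(Sampler, float4(uv, 0, 0));  // w selects the mip level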

We shouldn't need tex2D instructions within flow control here though, and you probably shouldn't be using any mip levels anyway if you want to use your image at maximum resolution, so I think these are of no use here. I could be way off though, so please correct me if I'm wrong.

Quote:But I'm still getting those nasty artifacts...


Yep, same here with my little test project. I tried a number of things, but the problem remains that the sampler simply returns wrong interpolations when using anything but point filtering. Crappy continuity checking introduced more artifacts than it fixed, but I tried implementing some simple linear filtering, which does work a bit:


[screenshots: point filtering (left), naive filter (right)]


float Sample(float2 tex)
{
    // Sampler is set up with point filtering!
    float4 color = tex2D(Sampler, tex);
    float q = color.r;

    return q < 0.5 ? q + 0.5 : q - 0.5;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Change 256 to the size of your texture,
    // it only supports square textures atm
    float2 texelpos = 256 * input.Tex0;
    float2 lerps = frac( texelpos );
    float texelSize = 1.0 / 256.0;

    // Note: hardware bilinear samples at texel centers (texelpos - 0.5),
    // an offset this naive version skips
    float4 sourcevals[4];
    sourcevals[0] = Sample(input.Tex0);
    sourcevals[1] = Sample(input.Tex0 + float2(texelSize, 0));
    sourcevals[2] = Sample(input.Tex0 + float2(0, texelSize));
    sourcevals[3] = Sample(input.Tex0 + float2(texelSize, texelSize));

    float4 lerpval = lerp( lerp( sourcevals[0], sourcevals[1], lerps.x ),
                           lerp( sourcevals[2], sourcevals[3], lerps.x ),
                           lerps.y );

    return float4(lerpval.r, 0, 0, 1);
}


It's still a far cry from the default hardware linear filtering though. So either I messed something up, or the hardware filtering is a lot more complex and/or adaptive than my simple naive filter.

With this said and done, I still think your best bet is the 2-pass approach described above, where you convert from signed to unsigned in the 1st pass and let the shader do hardware linear filtering on the correct unsigned values in the 2nd. Multiframe image playback need not be a showstopper use case, since even with this 2-pass approach any non-ancient graphics hardware should be able to render this at hundreds of frames per second.
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
This topic is haunting me [smile] I realized over coffee this morning that we've been sitting on the simple answer all along:

Quote:Original post by remigius
... the problem remains that the sampler simply returns wrong interpolations when using anything but point filtering ...


By setting up 2 samplers, one with point filtering and one with linear filtering, you can just take a point sample to determine if the remapped value is in a problem area (q=0.5). If so, you use the point sample there; if not, just use the linear sample.

float SamplePoint(float2 tex)
{
    // Sampler has point filtering
    float4 color = tex2D(Sampler, tex);
    float q = color.r;

    return q < 0.5 ? q + 0.5 : q - 0.5;
}

float SampleLinear(float2 tex)
{
    // Sampler2 has linear filtering, same texture as Sampler
    float4 color = tex2D(Sampler2, tex);
    float q = color.r;

    return q < 0.5 ? q + 0.5 : q - 0.5;
}

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float pointSample = SamplePoint(input.Tex0);
    float linearSample = SampleLinear(input.Tex0);
    float epsilon = 0.15;

    // use the point sample around q=0.5 (texture value around 0), linear otherwise
    float sample = ( abs(pointSample - 0.5) < epsilon ) ? pointSample : linearSample;

    return float4(sample, 0, 0, 1);
}


Sorry about the detour, I hope this is still of some use.


Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
Sneaky :D

Niko Suni

Quote:Original post by remigius

By setting up 2 samplers, one with point filtering and one with linear filtering, you can just take a point sample to determine if the remapped value is in a problem area (q=0.5). If so, you use the point sample there; if not, just use the linear sample.



Groovy! My only problem is that I don't know how to set up another sampler and use it in a pixel shader; I thought the device could only use one sampler (the one set on initialization).

I don't have any experience setting up the device's default samplers for my pixel shader; I just declare them in my effects file. Here are the samplers I defined in my test project. I'm not sure it's good form to use the same Texture in 2 samplers, but it seems to work. This goes in the top part of your effects (shader) file:

texture Texture;

sampler Sampler = sampler_state
{
    Texture = (Texture);
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};

sampler Sampler2 = sampler_state
{
    Texture = (Texture);
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
    AddressU = Clamp;
    AddressV = Clamp;
};


It's probably the best idea to set the texture directly onto the effect, instead of mucking about with registers to set it through the device. I'm not sure how this works in unmanaged D3D, but ID3DXBaseEffect::SetTexture sounds like a good bet [smile]
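Presumably something like this (assuming 'effect' is the compiled ID3DXEffect and pTexture is your signed texture):

// Bind the texture to the 'Texture' variable declared in the effect above
effect->SetTexture("Texture", pTexture);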
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
Quote:Original post by remigius

It's probably the best idea to set the texture directly onto the effect, instead of mucking about with registers to set it through the device. I'm not sure how this works in unmanaged D3D, but ID3DXBaseEffect::SetTexture sounds like a good bet [smile]


First of all, thanks for the answers. I'm learning a lot from this.
Anyhow, it seems you do things another way. We don't use effects; we use .psh files together with D3DXCompileShaderFromFile and IDirect3DDevice9::CreatePixelShader. By the way, I've never used effects; I just heard a partner mention them once :P
So I tried adding the sampler declarations to my .psh file; however, it seems I must also change the sampler being used by the device via SetSamplerState, because even if I do this...

float4 main( float2 input : TEXCOORD0 ) : COLOR0
{
    float4 pointSample = tex2D( pointSampler, input );
    float4 linearSample = tex2D( linearSampler, input );
    float epsilon = 0.1;

    // For values close to 0.5, use the point filtering sample (for other
    // cases, use the linear filtering sample). This is because linear
    // filtering has continuity issues around 0.5
    float4 color = pointSample; /*( abs(pointSample.r - 0.5) < epsilon ) ? pointSample : linearSample;*/

    // pixelsAreSigned, window, level, invert, gamma and the helper
    // functions below are defined elsewhere in our .psh file
    if (pixelsAreSigned)
    {
        color.r = adjustForSignedFormat( color.r );
        color.g = adjustForSignedFormat( color.g );
        color.b = adjustForSignedFormat( color.b );
    }

    color = adjustWindowLevel( color, window, level );
    if( invert ) color = colorInversion( color );
    color = adjustGamma( color, gamma );
    color.w = 0;

    return color;
}


... that is, using the point filtering sample ALWAYS, I still see the same result as when I use the linear one. My guess is that it's because the D3D device still has linear filtering set, which prevails over the shader's sampler declarations.
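If that's the problem, I guess the fix is to set up both stages on the device explicitly; something like this sketch (untested; the GetSamplerIndex lookup is my assumption about how the compiler assigns samplers to stages):

// 'constants' is the ID3DXConstantTable returned by D3DXCompileShaderFromFile
UINT pointStage  = constants->GetSamplerIndex(
    constants->GetConstantByName(NULL, "pointSampler"));
UINT linearStage = constants->GetSamplerIndex(
    constants->GetConstantByName(NULL, "linearSampler"));

// Same texture on both stages, different filtering per stage
device->SetTexture(pointStage, pTexture);
device->SetTexture(linearStage, pTexture);
device->SetSamplerState(pointStage,  D3DSAMP_MINFILTER, D3DTEXF_POINT);
device->SetSamplerState(pointStage,  D3DSAMP_MAGFILTER, D3DTEXF_POINT);
device->SetSamplerState(linearStage, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
device->SetSamplerState(linearStage, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);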
That guess aside, the reason I hardcoded the point sample is that if I uncomment the logic which chooses between the linear and point samples, it still looks bad:

[screenshot: Cuca]

No clue what's going on in this case...
