DirectX11/SharpDX: How to disable mip-mapping (and scaling) for (multipass) pixel shaders to access exact bitmap pixel values?

Dear DirectX11/SharpDX developers, dear community members,
first of all, many thanks for this very impressive project.
Because of the many limitations of the WPF shader model (no vertex shader, no multipass, etc.), I've switched to DirectX, and SharpDX is the managed wrapper of my choice.
As a first exercise I tried to draw a simple (unscaled) image, which worked with DrawBitmap. The image was drawn exactly like the source bitmap. Only if I resize the window, the image is resized as well. Do I need to resize the swap chain?
Next I tried to do exactly the same but with a small pixel shader in between, and after a day I understood that this is only possible if I also have a vertex shader. Maybe some enhancement could be made here?
I see two possibilities: use a triangle list / triangle strip, OR use a full-screen-quad vertex shader as described here [1]:

struct PS_IN   // vertex output / pixel shader input (layout assumed)
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD0;
};

PS_IN VSQuad(uint VertexID: SV_VertexID)
{
    // outputs a full screen quad with tex coords
    PS_IN Out;
    Out.tex = float2( (VertexID << 1) & 2, VertexID & 2 );
    Out.pos = float4( Out.tex * float2( 2.0f, -2.0f ) + float2( -1.0f, 1.0f), 0.0f, 1.0f );
    return Out;
}
and set up the bitmap as a texture bound as a pixel shader resource.
The test pixel shader looks like this:

Texture2D picture;
SamplerState pictureSampler;

float4 PS( PS_IN input ) : SV_Target
{
  // Simply invert each color channel value
  float4 color = picture.Sample(pictureSampler, input.tex);
  float4 ret;
  ret.xyz=1-color.xyz;
  ret.a=1.0;
  return ret;
}
But for now I have two problems:
The first one: the texture is (of course) not drawn at its original size but at the window size, and it resizes with the window. I hope I only need to update the swap chain size to prevent the resizing. But how do I generate a quad whose size matches my bitmap (texture) size exactly?
The next problem: all interpolation between pixels must be prevented in the pixel shader, because exact pixel values (and their neighboring pixels) must be accessed. According to MS, mip-mapping must be disabled, but there is no "NONE" (DISABLED) value in the Filter enumeration. So it looks impossible to get exact pixel values, or have I missed something?
What I am trying to do in the end:
This should be a two-pass pixel shader:
1st stage: zoom (in or out, with my own algorithm) a particular 2D region of a smaller/bigger image
2nd stage: apply some kernel filtering to the zoomed region
P.S. The images to rasterize will be updated continuously (and at a high frame rate) from an observation camera.
Here [2] is a little more explanation of what I am trying to achieve.
This sounds very easy, but I haven't found an example (neither SharpDX nor WPF) of how to do this correctly, mainly because all the examples only filter an input image rather than rescale it, and so never need access to exact pixel values. My problem is that the zoom algorithm produces artifacts if it does not work on exact pixel values.
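My rough idea for wiring up the two passes (an untested sketch; zoomShader, kernelShader, cameraTextureView and the size/format values are placeholders, and Device/Context/BackBufferView are my usual device objects) is to render the first pass into an intermediate texture and feed that into the second pass:

// Intermediate texture the zoom pass renders into and the kernel pass reads from
var intermediate = new Texture2D(Device, new Texture2DDescription
{
    Width = zoomWidth,
    Height = zoomHeight,
    MipLevels = 1,
    ArraySize = 1,
    Format = Format.B8G8R8A8_UNorm,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource
});
var intermediateRtv = new RenderTargetView(Device, intermediate);
var intermediateSrv = new ShaderResourceView(Device, intermediate);

// Pass 1: zoom a region of the camera image into the intermediate target
Context.OutputMerger.SetTargets(intermediateRtv);
Context.PixelShader.Set(zoomShader);
Context.PixelShader.SetShaderResource(0, cameraTextureView);
Context.Draw(3, 0);                               // full-screen pass via VSQuad

// Pass 2: kernel-filter the zoomed result into the back buffer
Context.OutputMerger.SetTargets(BackBufferView);  // this also unbinds intermediateRtv
Context.PixelShader.Set(kernelShader);
Context.PixelShader.SetShaderResource(0, intermediateSrv);
Context.Draw(3, 0);

The viewport would also have to be set to match each target's size before the corresponding draw, and the pixel shader resource slot cleared before the intermediate texture is reused as a render target in the next frame.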
It would be really helpful for me if somebody could explain how to do this "simple" thing, or enhance the SharpDX sample library.
Many, many thanks in advance.
Alexander Landa
If you do not want any mipmaps, just create your textures with MipLevels set to 1.
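For example, something along these lines (an untested sketch; width, height, pixelDataPointer and rowPitch stand in for your own values):

var desc = new Texture2DDescription
{
    Width = width,
    Height = height,
    MipLevels = 1,      // a single mip level: there is no mip chain to filter across
    ArraySize = 1,
    Format = Format.B8G8R8A8_UNorm,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource
};
var texture = new Texture2D(device, desc, new DataRectangle(pixelDataPointer, rowPitch));

If you load the texture from a file instead, Texture2D.FromFile should (if I remember correctly) have an overload taking an ImageLoadInformation where MipLevels can be set to 1 as well.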

Exactly - I don't want any mipmapping, any interpolation, or any other "magic" transformation.

What I want is access to my original pixel (and the surrounding pixels) inside my pixel shader(s).

Here is what I am doing in my C# code (with SharpDX):


            texture = Texture2D.FromFile<Texture2D>(Device, "sharpdx.png");
            textureView = new ShaderResourceView(Device, texture);
            sampler = new SamplerState(Device, new SamplerStateDescription()
            {
                Filter = Filter.MinMagMipLinear,
                AddressU = TextureAddressMode.Wrap,
                AddressV = TextureAddressMode.Wrap,
                AddressW = TextureAddressMode.Wrap,
                BorderColor = Color.Black,
                ComparisonFunction = Comparison.Never,
                MaximumAnisotropy = 16,
                MipLodBias = 0,
                MinimumLod = 0,
                MaximumLod = 16,
            });


            // Compile Vertex and Pixel shaders
            vertexShaderByteCode = ShaderBytecode.CompileFromFile("ColorInvert.fx", "VSQuad", "vs_4_0", ShaderFlags.None, EffectFlags.None);
            vertexShader = new VertexShader(Device, vertexShaderByteCode);
            
            pixelShaderByteCode = ShaderBytecode.CompileFromFile("ColorInvert.fx", "PS", "ps_4_0");
            pixelShader = new PixelShader(Device, pixelShaderByteCode);


            // Prepare All the stages
            Context.InputAssembler.InputLayout = null;
            Context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
            Context.VertexShader.Set(vertexShader);
            


            Context.Rasterizer.SetViewport(new Viewport(0, 0, 800, 600, 0.0f, 1.0f)); // This doesn't seem to change anything; it can probably be removed
            
            Context.PixelShader.Set(pixelShader);
            Context.PixelShader.SetSampler(0, sampler);
            Context.PixelShader.SetShaderResource(0, textureView);
            Context.OutputMerger.SetTargets(BackBufferView);

Where do I set the mipmaps to 1?

And how do I address a non-interpolated value in the pixel shader?

Many thanks,

Alex

If you just want to load pixels directly, with no sampling, no interpolation, no mipmapping, etc., then instead of this:

float4 color = picture.Sample(pictureSampler, input.tex);

Use this:

float4 color = picture.Load(input.tex);

Note that for this to work, input.tex should be an int3 instead of a float2, and be in the range 0...texturesize, not 0...1.
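For example, one way is to derive the texel coordinates from the position the rasterizer hands you (untested sketch; this assumes the image is drawn 1:1 to the window and that PS_IN.pos uses the SV_POSITION semantic):

Texture2D picture;

float4 PS( PS_IN input ) : SV_Target
{
    // SV_Position arrives at pixel centres (x + 0.5, y + 0.5), so truncating
    // to int gives the texel index when the image is drawn without scaling.
    int3 texel = int3( (int2)input.pos.xy, 0 );   // z = mip level 0
    float4 color = picture.Load( texel );
    return float4( 1 - color.rgb, 1.0 );
}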


mhagain, many thanks! I will try to go this route.

Now I have to see how to generate the texture coordinates in this other domain (integer texel indices instead of 0...1).

By the way, the linked MS page says it's int2 for Texture2D (which I use) and int3 for Texture2DArray. My texture is a simple 2D texture, so why and how would I use int3 coordinates?

And it's only supported from HLSL Shader Model 4 on, which is usable with SharpDX but not with WPF, so moving from WPF to SharpDX seems to be the right direction :)
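My first idea for the coordinate conversion is something like this (untested sketch, assuming the render target has the same size as the texture):

Texture2D picture;

float4 PS( PS_IN input ) : SV_Target
{
    // Convert the 0...1 texture coordinates coming from VSQuad into integer
    // texel indices; at pixel centres this gives x + 0.5, which truncates to x.
    uint width, height;
    picture.GetDimensions( width, height );
    int3 texel = int3( input.tex * float2( width, height ), 0 );   // z = mip level 0
    float4 color = picture.Load( texel );
    return float4( 1 - color.rgb, 1.0 );
}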

By the way, the linked MS page says it's int2 for Texture2D (which I use) and int3 for Texture2DArray. My texture is a simple 2D texture, so why and how would I use int3 coordinates?

You're right; my mistake.



You're correct, it's int3 for Location and int2 for Offset; I had overlooked this. But maybe I can use the int2 Offset parameter as an X/Y texture offset, so in the end it may be equivalent.
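For the kernel-filtering stage I imagine the neighbour access roughly like this (untested sketch; the weights are just placeholders for my real kernel):

Texture2D picture;

float4 KernelPS( PS_IN input ) : SV_Target
{
    int3 center = int3( (int2)input.pos.xy, 0 );

    // Exact neighbouring texels, reached by offsetting the integer location;
    // the optional int2 Offset argument of Load should do the same for constant offsets.
    // (Out-of-range Loads return zero, so the border may need special handling.)
    float4 sum = 4.0 * picture.Load( center )
               + picture.Load( center + int3( -1,  0, 0 ) )
               + picture.Load( center + int3(  1,  0, 0 ) )
               + picture.Load( center + int3(  0, -1, 0 ) )
               + picture.Load( center + int3(  0,  1, 0 ) );

    return float4( sum.rgb / 8.0, 1.0 );
}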

