DX11 Need to sample small res int2 texture for full screen pass

I am currently working on implementing a paper which requires me to use a downsampled texture of type int2, with resolution (width/k, height/k), where k is an unsigned integer in the range [1, infinity).

I need to sample this smaller int2 texture per window pixel in my full-screen pass, but I cannot use texture.Sample since it is an int2 type, and because it is smaller than the screen size I have read that I am not able to use texture.Load either.

So in short: I need to use a downsampled int2 texture in a full-screen rendering pass, but I don't know how to sample it properly.


Edited by Mercesa

7 hours ago, piluve said:

Hi, why can't you use texture.Sample()? I think you should be able to create a DXGI_FORMAT_R32G32_SINT texture and sample it.

No, INT textures are not filterable. Can't you just do myTex[ uint(svPosition) % textureSize ]? If you need filtering, use a FLOAT format, or use Gather and apply bilinear filtering yourself.
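A minimal sketch of that suggestion, assuming the low-res texture is bound as a Texture2D<int2> SRV and the render target is an integer format (the register slot and names are illustrative):

```hlsl
// Sketch of the operator[] approach on an integer texture.
Texture2D<int2> myTex : register(t0);

int2 PS(float4 svPosition : SV_Position) : SV_Target
{
    uint2 texSize;
    myTex.GetDimensions(texSize.x, texSize.y);

    // Wrap the full-res pixel coordinate into the low-res texture's range.
    uint2 coord = uint2(svPosition.xy) % texSize;
    return myTex[coord];
}
```

Note that this wraps (tiles) the texture across the screen rather than stretching it; the difference is discussed further down the thread.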

Edited by galop1n


I wasn't aware you could access textures like that in a pixel shader, but it makes sense, since it's possible in a compute shader. I'll try it out tomorrow and make another post :) Thanks!

21 hours ago, Mercesa said:

AND because it is smaller than the screen size I have read I am not able to use texture.Load either.

Can you explain why you cannot use Load, which would seem like the way to go?


I've read somewhere that Load only works if your coordinates match the screen 1:1, not if your texture is 1/2 or 1/4 the size of the screen. I have attempted to use Load, but I am not sure how to calculate the correct coordinates.

If I do position / float2(screenWidth, screenHeight) * float2(textureSizeX, textureSizeY) and use that as coordinates for Load, it does not work. The link below stated this, though no reasoning was given for it, and after experimenting myself I also could not figure out how to use Load properly with a smaller texture.
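For reference, one way that conversion could be written out (a sketch; assumes svPosition is the SV_Position pixel-shader input and that screenSize/texSize are supplied through a constant buffer, as these names are not from the thread):

```hlsl
Texture2D<int2> lowResTex : register(t0);

cbuffer PerFrame : register(b0)
{
    float2 screenSize; // full render-target resolution
    float2 texSize;    // low-res texture resolution (screenSize / k)
};

int2 SampleLowRes(float4 svPosition)
{
    // Normalize the pixel position into [0, 1), then scale up to
    // unnormalized texel indices for Load.
    float2 uv    = svPosition.xy / screenSize;
    int2   coord = int2(uv * texSize);
    return lowResTex.Load(int3(coord, 0)); // third component = mip level 0
}
```

Note that Load takes an int3 for a Texture2D, with the mip level in the last component; the truncation to int2 gives point-sampling behavior.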


Edited by Mercesa

16 hours ago, galop1n said:

No, INT textures are not filterable. Can't you just do myTex[ uint(svPosition) % textureSize ]? If you need filtering, use a FLOAT format, or use Gather and apply bilinear filtering yourself.

I have tried myTex[uint(svPosition) % textureSize] and I still end up with a black texture :( I am 100% sure the texture is bound to the pipeline, since it shows up in my graphics debugger.


Wow, OK, I think I figured it out. Debug mode gave me no errors yesterday, and now it suddenly says this:

PSSetShaderResources: Resource being set to PS shader resource slot 6 is still bound on output! Forcing to NULL. 

Sometimes restarting Visual Studio performs miracles... (This is also infuriating, because why the hell would graphics debugging even show the texture as bound if this error was the case?)


The only problem now is that it keeps giving me this error even though I am explicitly setting the resource to 0 before using it.


Edit: fixed my problem by using the last post of this thread.


Edited by Mercesa


To be clear, Texture2D::Load is functionally equivalent to using the [] operator. They're just different HLSL syntax that compiles to the exact same bytecode.

The way that they both work is that they use unnormalized coordinates to access the texture data. So for a 2D texture you'll typically pass a pair of integers where the X component is of the range [0, width - 1] and the Y component is of the range [0, height - 1] (values outside of that range will always return 0). This is different from the normalized UV coordinates that are used for Texture2D::Sample, where you pass [0.0, 1.0] floats that are mapped to the minimum and maximum extents of the texture. To convert from normalized to unnormalized coordinates you can use some simple math:

int2 unNormalized = int2(uv * float2(textureWidth, textureHeight));

Hopefully this makes it clear that you can pass arbitrary coordinates to Load or operator[], and so there's no requirement that the texture you're sampling has dimensions that exactly match the dimensions of your render target. However you may have to do a bit of math to compute the coordinates that you pass.

So what was suggested earlier, "myTex[uint(svPosition) % textureSize]", is one way of mapping your texture to your pixel shader positions. What this will do is essentially "tile" the low-resolution texture across your larger-resolution render target. So for instance, if the low-resolution texture is 1/4 the width and height of your render target, it will repeat 4 times in the X direction and 4 times in the Y direction, effectively repeating 16 times total.

I suspect what you want instead is to map your texture such that the low-resolution texture still "covers" the same amount of screen space as the source texture, and effectively covers the entire output render target. So if it was 1/4 the width and height, each texel of the low-res texture would cover a 4x4 block of pixels being shaded by your pixel shader. To do this, you'll want to divide the pixel coordinate by your "k" factor, where rtSize / textureSize == k:

myTex[uint2(svPosition.xy / k)]

Doing this will be roughly equivalent to using Sample with normal screen-mapped UV coordinates, with point filtering applied.
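Putting that together, a full-screen pixel shader along these lines should do it (a sketch; the constant-buffer layout and names are illustrative, and the SRV is assumed to be Texture2D<int2>):

```hlsl
Texture2D<int2> lowResTex : register(t0);

cbuffer Downsample : register(b0)
{
    uint k; // downsample factor: rtSize / textureSize
};

int2 PS(float4 svPosition : SV_Position) : SV_Target
{
    // Each low-res texel covers a k x k block of full-res pixels,
    // giving the same result as point-filtered, screen-mapped sampling.
    uint2 coord = uint2(svPosition.xy) / k;
    return lowResTex[coord];
}
```

Since operator[] and Load compile to the same thing, `lowResTex.Load(int3(coord, 0))` would be interchangeable here.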


