Need to sample small res int2 texture for full screen pass


I am currently working on implementing a paper which requires me to use a downsampled texture of type int2 with resolution (width/k, height/k), where k is an unsigned integer in the range [1, infinity).

I need to sample this smaller int2 texture in my full-screen pass, effectively once per window pixel, but I cannot use texture.Sample since it is an int2 type, and because it is smaller than the screen size I have read I am not able to use texture.Load either.

So in short: I need to use the downsampled int2 texture in a full-screen rendering pass, but I don't know how to sample it properly.
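For reference, on the HLSL side the resource looks roughly like this (just a sketch of my setup; the name is a placeholder, and I'm assuming a 32-bit signed integer format):

// Downsampled texture, e.g. created as DXGI_FORMAT_R32G32_SINT,
// with dimensions (width / k, height / k)
Texture2D<int2> gDownsampledTex : register(t0);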

 


Hi, why can't you use texture.Sample()? I think you should be able to create a DXGI_FORMAT_R32G32_SINT texture and sample it :D

7 hours ago, piluve said:

Hi, why can't you use texture.Sample()? I think you should be able to create a DXGI_FORMAT_R32G32_SINT texture and sample it

No, INT textures are not filterable. Can't you just do myTex[uint2(svPosition.xy) % textureSize]? If you need filtering, use a FLOAT format, or use Gather and apply bilinear filtering yourself.
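A minimal pixel shader along those lines might look like this (just a sketch; the texture name is a placeholder, and the return value only visualizes the integer data):

Texture2D<int2> myTex : register(t0);

float4 main(float4 svPosition : SV_Position) : SV_Target
{
    // Query the low-res texture's dimensions
    uint2 textureSize;
    myTex.GetDimensions(textureSize.x, textureSize.y);

    // SV_Position holds pixel centers (x + 0.5, y + 0.5), so truncating
    // to uint2 gives the integer pixel coordinate; the modulo tiles the
    // small texture across the full-size render target
    int2 value = myTex[uint2(svPosition.xy) % textureSize];

    // Visualize the integers somehow, e.g. normalized by the texture size
    return float4(value / float2(textureSize), 0.0f, 1.0f);
}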

I wasn't aware you could access textures like that in a pixel shader, but it makes sense, since it's possible in a compute shader. I'll try it out tomorrow and make another post :) Thanks!

21 hours ago, Mercesa said:

and because it is smaller than the screen size I have read I am not able to use texture.Load either.

Can you explain why you cannot use Load, which would seem like the way to go ...?

.:vinterberg:.

I've read somewhere that Load only works if your coordinates match the screen 1:1, not if the texture is 1/2 or 1/4 the size of the screen. I have attempted to use Load, but I am not sure how to calculate the correct coordinates.

If I do position / float2(screenWidth, screenHeight) * float2(textureSizeX, textureSizeY) and use that as the coordinates for Load, it does not work. The link below states this, though no reasoning is given, and after experimenting myself I also could not figure out how to use Load properly with a smaller texture.

https://gamedev.stackexchange.com/questions/65845/difference-between-texture-load-and-texture-sample-methods-in-directx

16 hours ago, galop1n said:

No, INT textures are not filterable. Can't you just do myTex[uint2(svPosition.xy) % textureSize]? If you need filtering, use a FLOAT format, or use Gather and apply bilinear filtering yourself.

I have tried myTex[uint2(svPosition.xy) % textureSize] and I still end up with a black texture :( I am 100% sure the texture is bound to the pipeline, since it shows up in my graphics debugger.

Wow, OK, I think I figured it out. Debug mode gave me no errors yesterday, and now it suddenly says this:

PSSetShaderResources: Resource being set to PS shader resource slot 6 is still bound on output! Forcing to NULL. 

Sometimes restarting Visual Studio performs miracles... (This is also infuriating, because why would graphics debugging even show the texture as bound if this error was the case?)

 

The only problem now is that it keeps giving me this error, even though I am explicitly setting the resource to null before using it.

 

Edit: fixed my problem by following the last post of this thread.

 

To be clear, Texture2D::Load is functionally equivalent to using the [] operator: they're just different HLSL syntax that compiles to the exact same bytecode.

The way that they both work is that they use unnormalized coordinates to access the texture data. So for a 2D texture you'll typically pass a pair of integers, where the X component is in the range [0, width - 1] and the Y component is in the range [0, height - 1] (values outside that range will always return 0). This is different from the normalized UV coordinates used for Texture2D::Sample, where you pass [0.0, 1.0] floats that are mapped to the minimum and maximum extents of the texture. To convert from normalized to unnormalized coordinates you can use some simple math:


int2 unNormalized = int2(uv * float2(textureWidth, textureHeight));

Hopefully this makes it clear that you can pass arbitrary coordinates to Load or operator[], and so there's no requirement that the texture you're sampling has dimensions that exactly match the dimensions of your render target. However you may have to do a bit of math to compute the coordinates that you pass.

So what galop1n suggested was using myTex[uint2(svPosition.xy) % textureSize], which is one way of mapping your texture to your pixel shader positions. What this will do is essentially "tile" the low-resolution texture across your larger-resolution render target. So for instance, if the low-resolution texture is 1/4 the width and height of your render target, the low-resolution texture will repeat 4 times in the X direction and 4 times in the Y direction, "repeating" it 16 times total. I suspect what you want instead is to load your texture such that the low-resolution texture still "covers" the same amount of screen space as the source texture, and effectively covers the entire output render target. So if it was 1/4 the width and height, each texel of the low-res texture would cover a 4x4 block of pixels being shaded by your pixel shader. To do this, you'll want to divide the pixel coordinate by your "k" factor, where rtSize / textureSize == k:


myTex[uint2(svPosition.xy / k)]

Doing this will be roughly equivalent to using Sample with normal screen-mapped UV coordinates, with point filtering applied.
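Putting that together, a full-screen pass along those lines might look like this (a sketch; the constant buffer layout and names are placeholders, and the coordinate is clamped in case the render target size isn't an exact multiple of k):

Texture2D<int2> myTex : register(t0);

cbuffer PassConstants : register(b0)
{
    uint k;  // downsample factor, where rtSize / textureSize == k
};

float4 main(float4 svPosition : SV_Position) : SV_Target
{
    uint2 textureSize;
    myTex.GetDimensions(textureSize.x, textureSize.y);

    // Each low-res texel covers a k x k block of full-res pixels, so
    // dividing the pixel coordinate by k selects the covering texel.
    // Clamp so edge pixels can't read out of bounds.
    uint2 coord = min(uint2(svPosition.xy) / k, textureSize - 1);
    int2 value = myTex[coord];

    // Use the integer values however the technique requires; this just
    // visualizes them
    return float4(value / float2(textureSize), 0.0f, 1.0f);
}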

You're a hero MJP :) Thanks for clearing that up!
