auto.magician

Copying from 2DTexture to 1DTexture


Hiya,

I've had an idea that may not work (well, it doesn't work as-is), but I was wondering if anyone has come across this...




Inside a pixel shader (Shader Model 4.0 upwards):
My problem is copying data from a 2D texture into a 1D texture. It makes sense that the destination texture size defines how much will be copied from the source. But is there a way I can copy each line of the source 2D texture into the same single line of the destination 1D texture, with an add or an overwrite, without using a loop? In a similar way to how you would normally use the Texture.Sample function.
I've tried various methods using 2D source and 2D destination textures, but there doesn't seem to be a straightforward way to tell the shader which UV line to write to, only where to sample from.



Many thanks

Dave.

Well, where to sample from on the source is determined by the texture coordinates you pass into your shader, as you know. Where to write to on the target is determined by the geometry you're drawing. Just draw a quad that covers the single line you want to write to in your target (that would be the "entire screen" if your target is a 1D texture in this scenario). Of course, you need to adjust the texture coordinates so that you're sampling from only one particular line in your source texture.

For instance, say you want to copy the 2nd line of a 100 x 4 texture to a 100 x 1 target. The texture coordinates you would use would be (0, 0.375) for the two vertices on the "left" of the quad and (1, 0.375) for the two vertices on the "right".

0.375 comes from 0.25 (the start of the 2nd line in normalized [0, 1] coordinates) plus 0.125 (half a line's height), which ensures we're sampling from the center of the line.
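The same arithmetic generalizes to any row. As a sketch (the helper name is mine, not from the thread):

```hlsl
// Sketch: v coordinate for the center of a given row, for any texture
// height. For row 1 (the 2nd line) of a 4-row texture this gives
// (1 + 0.5) / 4 = 0.375, matching the example above.
float RowCenterV(uint row, uint textureHeight)
{
    return (row + 0.5f) / (float)textureHeight;
}
```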




Pixel shaders can't control where they write to, since that would break many of the rules pixel shaders observe in order to stay fast (and to avoid race conditions). So your options are:

A. In the pixel shader, sample all of the texels in a single column, sum them together, and output the result

or

B. Render N horizontal strips where N is the height of your 2D texture, and for each strip sample a single row from the source 2D texture. If you enable additive blending, the output will be summed together.

I would suggest trying both and profiling to see which is faster for your particular use case + target hardware.
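Option A might look something like this; a minimal sketch, assuming the 2D source is bound as a Texture2D and its height is passed in a constant buffer (all names here are illustrative):

```hlsl
// Sketch of option A: each pixel of the 1D render target sums the
// corresponding column of the 2D source in a loop.
Texture2D<float4> Source;            // assumed bound by the application
cbuffer Params { uint SourceHeight; };

float4 SumColumnPS(float4 pos : SV_Position) : SV_Target
{
    uint column = (uint)pos.x;       // one output pixel per source column
    float4 sum = 0;
    for (uint y = 0; y < SourceHeight; ++y)
        sum += Source.Load(int3(column, y, 0));
    return sum;
}
```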

Thank you for your replies,

The idea came from a 2D game scenario: I wanted to create a 1D depth buffer for shadows, using the depth buffer's native operation to do the sorting, while using a 2D texture with the correct radial distance information stored in it. I was hoping to use the native speed of the sampler to do the hard work instead of a loop. I guess a loop is the only sensible option, but unfortunately it causes too much of a speed hit to be a serious approach on my hardware.

I know there are other methods that work well, I was just trying to think of something new.

Thanks again.
Dave.


You can implement MJP's solution B without doing manual looping in either CPU or shader (although the rasterizer will still loop over your geometry internally). It is possible to render all N lines with a single draw call. You don't even need a vertex buffer for this; just output the positions and texture coordinate from the VS based on the incoming vertex id.

As for optimizing the column sampling, consider that you can take advantage of high-performance hardware linear filtering by taking samples between source texels (instead of exactly at source texels). This works because (tex.Load(int3(0, 0, 0)) + tex.Load(int3(0, 1, 0))) is approximately equal to (tex.Sample(bilinearFilterState, float2(texelSize.x * 0.5f, texelSize.y)) * 2): sampling exactly on the boundary between two texels averages them. It isn't exactly equal, as a bit of precision can be lost when the filter averages the two values, but this only matters if you require exact values from the texture.

In some hardware (and some scenarios), bilinear filtered sampling is "free" because the operation can have dedicated circuitry and because the affected texels are likely in cache anyway. By using this to your advantage, you can potentially halve the performance impact by taking samples between rows 0 and 1, 2 and 3, 4 and 5, and so on, instead of explicitly loading texels from each row separately.
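As a sketch of the paired-sampling idea (assuming an even source height, clamp addressing, and illustrative names throughout):

```hlsl
// Sketch: one bilinear sample on the shared edge of rows y and y+1
// averages the two texels, so each loop iteration covers two rows.
Texture2D<float4> Source;
SamplerState BilinearClamp;          // linear filtering, clamp addressing
cbuffer Params
{
    float2 TexelSize;                // 1 / texture dimensions
    uint   SourceHeight;             // assumed even here
};

float4 SumColumnPairedPS(float4 pos : SV_Position) : SV_Target
{
    float u = pos.x * TexelSize.x;   // SV_Position is already at the pixel center
    float4 sum = 0;
    for (uint y = 0; y < SourceHeight; y += 2)
    {
        float v = (y + 1.0f) * TexelSize.y;  // boundary between rows y and y+1
        sum += Source.SampleLevel(BilinearClamp, float2(u, v), 0) * 2;
    }
    return sum;
}
```

SampleLevel is used instead of Sample because the coordinates are computed inside a loop, where implicit gradient calculation is not reliable.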

It would probably be possible to exploit anisotropic filtering the same way, although the setup would require more work and even more precision loss would occur.

Thanks Nik02,

I thought I'd thought of everything, and then you suggest that :D
I thought MJP's suggestions involved manual looping.
Although I'm not a complete newbie with HLSL, I'm no master either. I'll give it a go and report back here, whether I get it working or not.


Thanks again.

Dave.

Hmm,

I have to eat humble pie, as I can't get this to work. What I'm struggling with is conceptualising how to send the correct UV (or V only) values from the VS to the PS. I send in the current vertex id, but how does that equate to the PS rasterizing correctly across the incoming texture?
Maybe I need an advanced texturing tutorial to teach me the fundamentals of how the VS and PS work together?

All help is very much appreciated.
Many thanks

Dave.


You could always use a vertex buffer with a list of horizontal lines - using the vertex id to generate the lines is just an optimization.

If you want to use the vertex id to generate the stack of lines, consider that each line has exactly two endpoints. Therefore, even vertex ids (0, 2, 4...) represent one end of a line, and odd ids (1, 3, 5...) the other end. The logic of the vertex shader is to check whether the id is odd or even and, based on this, output a vertex on the leftmost or the rightmost side of the render target. Do note that the normalized viewport coordinate range runs from -1 to +1, not the actual render target dimensions.

The pixel shader doesn't have to know how the vertex shader calculated its output, so it will be the same regardless of whether you use a vertex buffer or calculate the vertices on the fly.

To send texture coordinates from VS to PS, use an output structure or an output parameter with the semantic TEXCOORD0. For a one-dimensional texture coordinate, a single float is sufficient. This part is just like ordinary texturing. The texture coordinate can also be calculated on the fly in the VS: just apply a bit of math given the vertex id and the known maximum number of lines.
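A rough sketch of the vertex shader just described (all names illustrative); the application would draw with a line-list topology, no vertex buffer bound, and a call such as Draw(2 * numRows, 0):

```hlsl
// Sketch: generate one horizontal line per source row from SV_VertexID
// alone. Even ids are left endpoints, odd ids are right endpoints.
// All lines land on y = 0, covering the single-row 1D target; enable
// additive blending to sum the rows together.
cbuffer Params { uint NumRows; };    // height of the source texture

struct VSOut
{
    float4 pos : SV_Position;
    float2 tex : TEXCOORD0;          // u runs 0..1, v selects the source row
};

VSOut LineVS(uint id : SV_VertexID)
{
    uint row   = id / 2;             // which line this vertex belongs to
    bool right = (id & 1) != 0;      // odd ids are the right endpoint

    VSOut o;
    o.pos = float4(right ? 1.0f : -1.0f, 0.0f, 0.0f, 1.0f);  // clip space is -1..+1
    o.tex = float2(right ? 1.0f : 0.0f, (row + 0.5f) / (float)NumRows);
    return o;
}
```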

I can write some sample code today or tomorrow, if needed.

Hiya Nik02,

I was using the DrawInstanced call to generate the geometry (256 quads of 256x1 for the destination, from a 256x256 source), then using SV_InstanceID and some math to generate the v coord, and passing the same u coord straight through; these UVs are used to read from the source texture. I did get it working when (as a test) copying from a 256x256 to a 256x256 using


struct ShadowMapPixelInput
{
    float4 pos    : SV_POSITION;
    float2 tex    : TEXCOORD0;
    float  instID : TEXCOORD1;
};

float4 inCol = inTex.Load(uint3(input.tex.x * 256.0, input.tex.y * 256.0, 0));



but when I use the SV_InstanceID to get the v value, it just doesn't work:


float4 inCol = inTex.Load(uint3(input.tex.x * 256.0, input.instID, 0));



In PIX it shows that the value the VS outputs for instID : TEXCOORD1 is the value of SV_InstanceID (an integer cast to a float), but not in the PS. Am I using it correctly in the Texture.Load function? PIX says the value may be shown incorrectly as it's an SV_ value.

Thank you for your help.
EDIT: Lol, I understand now! When using the instID, the v data is the same for each vertex, which is incorrect. I'll rework the math and try again.
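One way to rework it (a sketch under the same 256x256 assumptions; the fragment below is hypothetical, not code from the thread) is to fold the instance's row into tex.y in the vertex shader, so the pixel shader can keep using the two-coordinate Load path that already worked in the 2D-to-2D test:

```hlsl
// VS fragment (sketch): bake the source row for this instance into tex.y.
output.tex = float2(u, (instanceID + 0.5f) / 256.0f);

// PS then uses the same code path that already worked in the 2D->2D test;
// input.tex.y * 256.0 yields instanceID + 0.5, which truncates back to the row.
float4 inCol = inTex.Load(uint3(input.tex.x * 256.0, input.tex.y * 256.0, 0));
```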

Dave

My technique doesn't require instancing, nor sending any geometry identifiers to the PS. You don't need to draw quads; simple lines (primitive type = line list) will suffice.
