Copying from 2DTexture to 1DTexture

Okay, here is some sample code I wrote during a coffee break:

[source language='hlsl']

struct VS_OUT
{
    float4 pos : SV_Position;
    float2 tc  : TEXCOORD0;
};

cbuffer consts
{
    int sourceLines; // how many total lines in the source texture

    // NOTE: we assume that the source and destination widths are the same, so no coefficients for scaling between them
};

// Vertex shader. Note that we don't read from a vertex buffer since no IA-supplied parameters are present.
// Draw (sourceLines / 2) line primitives for this to make sense.
VS_OUT VS(uint vid : SV_VertexID)
{
    VS_OUT output = (VS_OUT)0;

    bool isEven = ((vid % 2) == 0); // are we on an even vertex (true) or an odd vertex (false)?

    int sourceRow = vid;
    if (!isEven) sourceRow -= 1; // we want the right end to have the same value as the left end here

    // In effect, vid 0 and 1 have the same source row index now; same with vid 2 and 3, 4 and 5...

    // sourceRow /= 2; // divide the balanced vertex id pair by 2 to get the final row id
    // No need to do that, though, since we are going to be sampling in intervals of two anyway :)

    // Write common data for each output:
    output.pos = float4(0.0f, 0.0f, 0.5f, 1.0f);
    output.tc.y = ((float)sourceRow + 1.0f) / ((float)sourceLines); // sample at the midpoints between rows 0 and 1, 2 and 3, 4 and 5... so the linear filter averages each pair

    // Write even/odd dependent data:
    if (isEven) // even vertex id, write the left endpoint of the line
    {
        output.pos.x = -1.0f; // left edge of the render target
        output.tc.x = 0.0f;   // texture's left edge
    }
    else // odd vertex id, write the right endpoint of the line
    {
        output.pos.x = 1.0f; // right edge of the render target
        output.tc.x = 1.0f;  // texture's right edge
    }

    return output;
}

// Sampler state that specifies a linear magnification filter:
SamplerState LinearSamplerState
{
    Filter = MIN_POINT_MAG_LINEAR_MIP_POINT;
    // Wrap modes can be left at default because we don't read past the source 0...1 range.
};

Texture2D sourceTex; // source texture, the rows of which we want to accumulate into one destination row


// Pixel shader (draw with additive blending and z read/write off for this to accumulate the source rows):
float4 PS(VS_OUT input) : SV_Target0
{
    float4 color = sourceTex.Sample(LinearSamplerState, input.tc);
    color *= 2.0f; // to compensate for the averaging caused by between-row sampling
    return color;
}
[/source]

Sorry for possible bugs, I wrote this in notepad and haven't compiled or tested it. The basic logic should be ok.
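
For completeness, the "additive blending, z read/write off" part mentioned in the pixel shader comment could be expressed as effect-file state objects roughly like this (also untested; the state object and technique names are just placeholders):

[source language='hlsl']
// Accumulate by adding each line's output to the render target:
BlendState AdditiveBlend
{
    BlendEnable[0] = TRUE;
    SrcBlend = ONE;
    DestBlend = ONE;
    BlendOp = ADD;
    SrcBlendAlpha = ONE;
    DestBlendAlpha = ONE;
    BlendOpAlpha = ADD;
    RenderTargetWriteMask[0] = 0x0F;
};

// No depth testing or writing while accumulating:
DepthStencilState NoDepth
{
    DepthEnable = FALSE;
    DepthWriteMask = ZERO;
};

technique10 AccumulateRows
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, PS()));
        SetBlendState(AdditiveBlend, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
        SetDepthStencilState(NoDepth, 0);
    }
}
[/source]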

Niko Suni

Wow. Thank you. I didn't expect you to knock up some code like that, and I really appreciate you taking time out of your coffee break to do it. I've read through it and I do understand what you're doing. I've also learned about generating vertices within the shader from it. I'll have a go in the next few days, as I'm working ridiculous hours at the moment (non-IT related).

Thanks again.

Dave

No problem, glad I could be of help.

While it is merely an optimization here, generating vertices in the shaders is a powerful way to draw procedural geometry without having to allocate any persistent memory for it. You can imagine the possibilities when you consider that, for example, you can run pseudorandom generators seeded by the vertex id and some external constants. And if you combine this with instancing and geometry shaders (and geometry shader instancing and tessellation in SM5+), you can generate massive amounts of visuals with very little memory consumption.
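
Just to illustrate the idea (a rough, untested sketch, not something you need for this task), a vertex shader could scatter point primitives using nothing but the vertex id and a small integer hash:

[source language='hlsl']
// Hypothetical example: generate pseudorandom point positions from SV_VertexID alone.
// Draw N points with point-list topology and no vertex buffer bound.

uint WangHash(uint seed)
{
    seed = (seed ^ 61) ^ (seed >> 16);
    seed *= 9;
    seed = seed ^ (seed >> 4);
    seed *= 0x27d4eb2d;
    seed = seed ^ (seed >> 15);
    return seed;
}

float4 RandomPointVS(uint vid : SV_VertexID) : SV_Position
{
    uint h = WangHash(vid);

    // Split the hash into two pseudorandom values in the 0...1 range:
    float rx = (float)(h & 0xFFFF) / 65535.0f;
    float ry = (float)(h >> 16) / 65535.0f;

    // Map to clip space:
    return float4(rx * 2.0f - 1.0f, ry * 2.0f - 1.0f, 0.5f, 1.0f);
}
[/source]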

The technique has a drawback with current tooling, though: PIX crashes when debugging draw calls that use vertex shaders without any bound vertex buffers. Even though this is a valid usage pattern (according to the spec), the tool assumes that any draw call has at least one vertex buffer, and it causes an access violation when it accesses the non-existent first buffer. I assume the next version of PIX will have this fixed. The technique is very commonly used to generate the vertices of a screen-space quad or triangle for rendering post-processing effects.
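
For reference, that screen-space triangle is typically generated along these lines (written from memory, so treat it as a sketch):

[source language='hlsl']
// Fullscreen triangle from SV_VertexID: draw 3 vertices with no vertex buffer bound.
struct FS_OUT
{
    float4 pos : SV_Position;
    float2 tc  : TEXCOORD0;
};

FS_OUT FullscreenTriangleVS(uint vid : SV_VertexID)
{
    FS_OUT output;

    // vid 0, 1, 2 -> tc (0,0), (2,0), (0,2): one oversized triangle that covers the whole viewport.
    output.tc  = float2((vid << 1) & 2, vid & 2);
    output.pos = float4(output.tc * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.5f, 1.0f);

    return output;
}
[/source]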

Niko Suni

Hiya.

With your help, I got this working and I'm now able to create a 1D depth map from a 2D image, using the depth buffer for depth testing. It works great, so a big thank you.
But I now have another problem, which again I think is to do with the texture coords:
I want to use the information from the 1D depth map and build a 2D image from it. I'm successfully sampling from the 1D depth map as a shader resource, but my problem is that the depth information is only being applied to the 2D texture along the diagonal from 0,0 to 1,1. I thought the rasterizer would sample the 1D texture only at the specified incoming u coord and apply it to all vertical lines? This is a cut-back version of the sampling:



[source language='hlsl']
Texture1D inDepth;

struct PS_IN
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD0;
};

float4 ShadowMapPS(PS_IN input) : SV_Target
{
    [...]
    float shadowMapDistance;
    shadowMapDistance = inDepth.Sample(Sampler, input.tex.x).r;
    [...]
    float light = shadowMapDistance;
    [...]
    return float4(light, light, light, 1);
}
[/source]


I'm passing a standard fullscreen quad with standard UVs in the vertex buffer (I understand your optimisation above, but I need the tools :) ). I've also tried changing the UV y coord for the 4 vertices to force it to sample from various fixed v values, i.e. 0.0 or 0.5, but the sampled 2D texture is still only affected along the diagonal, as if it's sampling correctly but writing to the same v coord as the u coord.

Your help is always appreciated.
Dave.

Verify your vertex shader logic regarding the texture coordinates. PIX can help you here - check the vertex table on the quad. Also, you can check all the intermediate textures visually to see if they match the expected content.

A good way to verify texture coordinates visually is to render them as color; for example, u as red and v as green.
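
Something as simple as this would do (a throwaway sketch, reusing your PS_IN struct):

[source language='hlsl']
// Debug pixel shader: output the interpolated texture coordinates as color.
float4 DebugTexcoordPS(PS_IN input) : SV_Target
{
    return float4(input.tex.x, input.tex.y, 0.0f, 1.0f); // u -> red, v -> green
}
[/source]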

Although it wouldn't explain the diagonal pattern, do verify that your 1D texture's format has enough precision to store the accumulated data in the first place. A 32-bit floating-point format should be enough here.

Do you have an image of the behavior you're describing?

Niko Suni

Oh my god!!
After nearly half an hour of writing a lengthy explanation and making some detailed pics:
I've been tripped up by the shader compilation! In the shader file I have 2 texture definitions and various shader functions. In my code I was thinking in terms of the 2 texture slots and setting the depth resource to slot 1 (as opposed to slot 0) in the PSSetShaderResources call, not realising that because the first texture isn't used in this particular shader function, it gets culled out of the compilation, so the shader ends up using whatever was already bound there, I don't know what. It must be a different texture from a previous stage of the process! LOL.
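
For future reference, I think one way to stop this happening again is to pin the textures to explicit registers in the shader, so the slot numbers on the application side can't drift even if one of the textures gets optimised away, something like:

[source language='hlsl']
// Hypothetical declarations (the first texture's name here is made up);
// with explicit registers, slot 0 and slot 1 in PSSetShaderResources always
// match these, whether or not a particular function actually uses them.
Texture2D colorTex : register(t0);
Texture1D inDepth  : register(t1);
[/source]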



I've been tripped up by that in the past, too.

Thank you for replying, I feel so dumbfounded :D

Dave

