[HLSL] How to get color value from the previous pass?

Started by
6 comments, last by g0nzo 17 years, 7 months ago
Hi! I'm doing a gaussian blur. If I want to make a single technique that would have 2 passes (for horizontal and vertical blur), how would I read the result from the first pixel shader pass in the second one?
Most likely you'll have to render the first pass into a texture, then bind that texture for the second pass. The only way to read/write a pixel at the same time (i.e. read from the surface you are currently writing to) is via the alpha blend mechanism.

Richard "Superpig" Fine - saving pigs from untimely fates - Microsoft DirectX MVP 2006/2007/2008/2009
"Shaders are not meant to do everything. Of course you can try to use it for everything, but it's like playing football using cabbage." - MickeyMouse

What superpig said - or, if possible, make it a single-pass technique and do the combination in the shader.
Thanks.

That sucks [smile]. I thought that's what multiple passes are for - so I wouldn't have to swap textures manually.

I need to have 7 Gaussian blurs, each with a different kernel size. Do I have to create 7 different arrays of offsets and weights (one for each kernel size), or can I just create one sized for the largest possible kernel and not fill it completely?

Another thing - is there some way to avoid writing 14 techniques (7 horizontal passes and 7 vertical)? It's just copy&paste, but it would make my shader less readable.

[EDIT] One more thing: if I have a shader, set some parameters, render something using technique A, and later render something using technique B, and both techniques use the same parameters (which don't change during a single frame), do I have to pass them to technique B again?

[Edited by - g0nzo on September 18, 2006 2:43:25 PM]
Quote:Original post by g0nzo
That sucks [smile]. I thought that's what multiple passes are for - so I wouldn't have to swap textures manually.
No, the problem is that without multiple passes, you can't swap textures at all, let alone manually. [smile]

Quote:I need to have 7 gaussian blurs, each with different kernel size. Do I have to create 7 different arrays (each for specific kernel size) for offsets and weights or can I just create one of size of the largest possible kernel and just don't fill it completely?
I'd probably create arrays containing the requisite filter weights. And you don't necessarily have to use fewer samples in a smaller kernel, provided you scale the weighting accordingly - the kernel is, after all, supposed to be a continuous curve that we're sampling at discrete intervals. The number of intervals doesn't need to decrease just because the kernel is narrower.
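To make this concrete, here's a small CPU-side sketch in Python of how the weight math works (the function name, tap count, and sigma values are illustrative, not from the thread): the same fixed-size array of taps serves any kernel width, because you sample the continuous Gaussian and renormalise so the weights always sum to 1.

```python
import math

def gaussian_weights(num_taps, sigma):
    """Sample a continuous Gaussian at num_taps evenly spaced points
    centred on zero, then renormalise so the weights sum to 1."""
    half = num_taps // 2
    raw = [math.exp(-(i * i) / (2.0 * sigma * sigma))
           for i in range(-half, half + 1)]
    total = sum(raw)
    return [w / total for w in raw]

# The same 7-tap array works for any sigma; only the values change.
narrow = gaussian_weights(7, 0.8)   # small standard deviation
wide   = gaussian_weights(7, 2.5)   # large standard deviation
print(narrow)
print(wide)
```

Note how both sets sum to 1; the wider kernel just shifts weight towards the outer taps rather than needing more of them.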

Quote:Another thing - is there some way, so I won't have to write 14 techniques (7 horizontal passes and 7 vertical)? It's just copy&paste, but it will make my shader less readable.
Only write one technique, and pass both the weights and the texture coordinate offsets in as array parameters. The kernel is separable (right?), so you can treat horizontal and vertical blurs in exactly the same way - you just swap the x and y on the texture coordinate offset array.
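For anyone unsure why the horizontal and vertical passes compose into the full 2D blur, here's a self-contained Python sketch of separability (all function names are illustrative; the "swap x and y" step is done via a transpose). A row pass followed by a column pass with the same 1D kernel reproduces direct convolution with the 2D kernel.

```python
import math

def gaussian_1d(num_taps, sigma):
    half = num_taps // 2
    raw = [math.exp(-(i * i) / (2.0 * sigma * sigma))
           for i in range(-half, half + 1)]
    s = sum(raw)
    return [w / s for w in raw]

def convolve_1d(row, kernel):
    # 'same'-size convolution with zero padding at the borders
    half = len(kernel) // 2
    out = []
    for x in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            i = x + k - half
            if 0 <= i < len(row):
                acc += row[i] * w
        out.append(acc)
    return out

def transpose(img):
    return [list(col) for col in zip(*img)]

def separable_blur(img, kernel):
    # Horizontal pass, then the same code with x and y swapped (transpose).
    h = [convolve_1d(row, kernel) for row in img]
    return transpose([convolve_1d(col, kernel) for col in transpose(h)])

def full_2d_blur(img, kernel):
    # Direct convolution with the 2D kernel (outer product of the 1D one).
    h, w, half = len(img), len(img[0]), len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky, wy in enumerate(kernel):
                for kx, wx in enumerate(kernel):
                    yy, xx = y + ky - half, x + kx - half
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx] * wy * wx
            out[y][x] = acc
    return out
```

Two 1D passes cost O(n) taps per pixel instead of O(n^2), which is the whole point of splitting the blur into horizontal and vertical passes.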

Quote:One more thing: if I have a shader and I'm setting some parameters, then render something using technique A and later I'm rendering something using technique B, if both techniques use the same parameters, that don't change during single frame render, do I have to pass them again to the technique B?
I think that for two techniques in the same effect (at least under D3DXEffect - I assume that's what we're talking about here), you're right: you don't have to pass the parameters again. If you want to preserve parameters across different effects entirely, you will need to use effect pools.


Thanks, but I'm not sure I understand it correctly.
Quote:
I'd probably create arrays containing the requisite filter weights.

So I should create many arrays - one for every set of weights (the standard deviation increases with every iteration)?
Quote:
And you don't necessarily have to use fewer samples in a smaller kernel

So I should use the same number of samples no matter what the kernel size is? I.e. if I have a very small standard deviation (in the first iteration), the weights will quickly go to 0. What's the point in sampling pixels and then multiplying them by a weight equal to 0? I know it won't change the result, but it will be a big hit on performance, right?
Quote:Original post by g0nzo
Thanks, but I'm not sure I understand it correctly.
Quote:
I'd probably create arrays containing the requisite filter weights.

So I should create many arrays - one for every set of weights (the standard deviation increases with every iteration)?
You have one array on the shader side, but many different sets of values that your program code uploads into that array.
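A sketch of that split, in Python (the array size, sigma values, and names are illustrative; the commented D3DX call shows where the upload would happen on the C++ side): one fixed-size array lives in the shader, and the application precomputes seven sets of values and uploads whichever one the current blur needs.

```python
import math

MAX_TAPS = 15  # size of the single shader-side array (an assumption)

def gaussian_weights(num_taps, sigma):
    half = num_taps // 2
    raw = [math.exp(-(i * i) / (2.0 * sigma * sigma))
           for i in range(-half, half + 1)]
    s = sum(raw)
    return [w / s for w in raw]

# Seven value sets for the one shader array: same tap count, growing sigma.
sigmas = [0.5 * (i + 1) for i in range(7)]   # illustrative, not from the thread
weight_sets = [gaussian_weights(MAX_TAPS, s) for s in sigmas]

# On the D3D side you'd upload the active set before each blur, roughly:
#   effect->SetFloatArray("weights", &weight_sets[i][0], MAX_TAPS);
```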

Quote:
So I should use the same number of samples no matter what the kernel size is? I.e. if I have a very small standard deviation (in the first iteration), the weights will quickly go to 0. What's the point in sampling pixels and then multiplying them by a weight equal to 0? I know it won't change the result, but it will be a big hit on performance, right?
No, you don't use pixels that have zero weight - you decrease the gaps between the samples instead. Though in retrospect it's a bit pointless for finite-resolution textures. [smile]
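Here's a quick numeric illustration of that point (Python; names and values are illustrative): with a very narrow Gaussian and one-texel spacing, the outer taps carry essentially zero weight, but if the spacing shrinks along with sigma, every tap stays meaningful.

```python
import math

def weights_for_spacing(num_taps, sigma, spacing):
    """Weights at sample positions i * spacing, renormalised to sum to 1."""
    half = num_taps // 2
    raw = [math.exp(-((i * spacing) ** 2) / (2.0 * sigma * sigma))
           for i in range(-half, half + 1)]
    s = sum(raw)
    return [w / s for w in raw]

sigma = 0.5                                          # very narrow kernel
unit   = weights_for_spacing(9, sigma, 1.0)          # one-texel gaps: outer taps wasted
scaled = weights_for_spacing(9, sigma, sigma / 2.0)  # gaps shrunk with sigma
print(min(unit), min(scaled))
```

The caveat superpig mentions is that on a finite-resolution texture, sub-texel spacing just re-samples (bilinear blends of) the same texels, so in practice shrinking the gaps below one texel buys little.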


Thanks again.

Can I pass the size of the kernel to the pixel shader, so that it knows how many samples to take?
for (int i = 0; i < kernelSize; i++)
    result += tex2D(luminanceTextureSampler,
                    input.TexCoords + float2(blurOffsets[i].r, 0.0f)) * blurOffsets[i].g;

If it's impossible, then I still can't figure out how I can write it using just one technique without performing unnecessary texture reads.

[EDIT]
Another idea [smile]
Could I have a single function PS_GaussianConvolution, which would take float2(offsets, 0) (or float2(0, offsets)), weights and the kernel size, and then have 14 techniques like:
technique Gaussian3x3Horizontal
{
    pass P0
    {
        VertexShader = null;
        PixelShader  = compile ps_2_0 PS_GaussianConvolution(float2(offsets, 0.0f), weights, 3);
    }
}

?

[Edited by - g0nzo on September 20, 2006 5:57:45 AM]

This topic is closed to new replies.
