Moldie

Persisting variables in HLSL?


Recommended Posts

How do I make a variable persist between invocations of the HLSL pixel shader? I'm trying to write an application that adds the luminance value of each rendered pixel to a variable inside the pixel shader, to be used to calculate an image key for HDR tonemapping in a final pass. Since the 'static' keyword on a global variable makes it invisible to the outside application, I defined the variable in local scope instead, but then it doesn't persist. Have I missed something?

Peter

Granted, I've hardly ever worked with shaders (I plan to get around to it soon, though), but I don't think that is actually possible.

However, couldn't you render the scene to a texture, perform the operation, and save the final value to a pixel in that texture, which can be sampled later?

You can't do this, since it would make the shader unparallelizable. However, I've seen people use mipmap generation for this sort of thing: they render to a texture, then use the single texel of the lowest mip level as the average luminance in later passes.

Take a look at the HDRLighting sample that comes with the DirectX SDK.

You can't really do this with shader variables. To add the luminance values for HDR, you'll need to successively downsample the original HDR surface down to a 1x1 texture.

The way I've done this is to create render targets whose dimensions are powers of 3, starting from the power of 3 closest to the original HDR image. For example, if the original HDR image is, say, 800x600, then I would create the following targets: 729x729, 243x243, 81x81, 27x27, 9x9, 3x3, and 1x1.

You take your original 800x600 image and set it as an input texture. Then you render a screen-aligned quad that is exactly 729x729, do a 3x3 texture fetch on the HDR target, perform any calculations you need, and store the result in the 729x729 target.

Then this 729x729 target is set as the input texture and you render a screen-aligned quad that is 243x243. You keep doing this until you get down to a 1x1 target. This texture is then the input to your final tonemapping pass.

You don't have to use a 3x3 neighborhood, but I found that, for me, it was the best tradeoff. Larger neighborhoods require more texture fetches per pass, but smaller neighborhoods require more passes.

Take a look at jollyjeffers' HDR demo. It does pretty much the same thing and might be a little easier to understand than the HDRLighting sample in the SDK.

neneboricua

Since I'm storing the pixel's luminance value, I'm using the D3DFMT_R32F format for the textures I generate. Wouldn't it be possible for Direct3D to autogenerate the 1x1 level when I render to a 512x512 texture, for example? And is there a way to pick that 1x1 level as the texture for the final scene? Maybe I'm just dreaming away here, hehe. Am I compelled to render the levels to separate textures?

Peter

If I remember correctly, you can't generate mipmaps for floating-point render targets. But even if you could, the operation would be limited to averaging the pixels of the image.

You can try it and see if it works.

neneboricua
