HDR Help

So,

I'm starting to implement HDR in my scene. I have a few questions after reading online and going through the HDR_Pipeline demo in the SDK.

I set up my render target texture as D3DFMT_A16B16G16R16F. How do I actually get those extra bits of color? If I render my scene as-is and then render that texture out to the screen, it looks exactly the same.

Question 1:
The HDR_Pipeline demo actually has a pixel shader that multiplies each RGB value by a scalar factor. Is this how HDR is done? So every object (or really every shader) in my scene now has to multiply its pixel RGB by some magical scalar factor so I can later use that for my luminance calculation? Is this how everyone does it: take the resulting RGB from a model and multiply it by some scalar to get over-1.0 results?

Question 2:
Is there any faster way to get the average or log-luminance value other than downsampling? If not, do people generally start at 256x256 for a total of 9 downsamples? The HDR_Pipeline used 3x3 downsamples instead of 2x2; is that better?

Thanks!
Jeff.

[quote]
I set up my render target texture as D3DFMT_A16B16G16R16F. How do I actually get those extra bits of color? If I render my scene as-is and then render that texture out to the screen, it looks exactly the same.

Question 1:
The HDR_Pipeline demo actually has a pixel shader that multiplies each RGB value by a scalar factor. Is this how HDR is done? So every object (or really every shader) in my scene now has to multiply its pixel RGB by some magical scalar factor so I can later use that for my luminance calculation? Is this how everyone does it: take the resulting RGB from a model and multiply it by some scalar to get over-1.0 results?
[/quote]

So, LDR (low dynamic range) is when the light values are in the range [0, 1], while HDR (high dynamic range) is when light values are in the range [0, infinity). A D3DFMT_A16B16G16R16F texture can hold values far beyond 1 (half-precision floats go up to about 65504), so the texture format doesn't need any work to hold the extra data. What needs to change is two things. First, the lights in your scene must actually add up to something past 1 (or start at a value greater than 1). Second, you must realize that no matter what your render target supports, the computer screen only supports [0, 1]. You need to map [0, infinity) to [0, 1]. This conversion is called "tone mapping".
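For example, nothing stops your lighting code from producing values past 1. A made-up fragment, just to illustrate (not from the SDK sample; NdotL and albedo stand in for your usual lighting math):

// Hypothetical HDR light setup: intensities are free to exceed 1.
float3 sunColor   = float3(1.0, 0.95, 0.8) * 5.0; // a bright sun at intensity 5
float3 skyAmbient = float3(0.1, 0.12, 0.2);       // dim ambient

// No saturate(): the A16B16G16R16F target keeps values above 1,
// and a later tone-mapping pass maps them back into [0, 1] for display.
float3 hdrColor = (sunColor * NdotL + skyAmbient) * albedo;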

MJP has a really great demo here: http://mynameismjp.wordpress.com/2010/04/

There is also a great book on GameDev that has tons of information, but I can't seem to find it at the moment.

[quote]
Question 2:
Is there any faster way to get the average or log-luminance value other than downsampling? If not, do people generally start at 256x256...
[/quote]

Not really. You can start sampling into any size texture, but anything too small won't be 100% accurate. MJP also had a demo that did this in a compute shader, but I'm not sure that's exactly what you're asking for: http://mynameismjp.wordpress.com/2011/08/
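If you do go the downsampling route, the usual first pass just converts the HDR scene into a luminance texture, something like this (a rough D3D9-style HLSL sketch of my own; the sampler name is made up):

sampler HdrSampler;   // the A16B16G16R16F scene texture

float4 MeasureLumPS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 c = tex2D(HdrSampler, uv).rgb;
    // Standard luminance weights; the epsilon avoids log(0) on black pixels.
    float lum = dot(c, float3(0.299, 0.587, 0.114));
    return float4(log(lum + 0.0001), 0.0, 0.0, 1.0);
}

You then downsample that to 1x1 and take exp() of the result, which gives you the log-average luminance rather than a plain mean.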


[quote]
First, the lights in your scene must actually add up to something past 1 (or start at a value greater than 1).
[/quote]

Why? If you have a very dark scene, the whole scene gets brightened up. You can explain it with the human eye's ability to see better in darkness after some time.

[quote]
Is there any faster way to get the average or log-luminance value other than downsampling?
[/quote]

There's always some overhead when switching the render target, setting up the shader, etc., so a 3x3 downsample could be faster because it reduces the number of passes.

[quote]
If not, do people generally start at 256x256 for a total of 9 downsamples? The HDR_Pipeline used 3x3 downsamples instead of 2x2; is that better?
[/quote]

You can use some tricks like hardware filtering, that is, you can sample 4 pixels with just one tap (when linear filtering is on). A 2x2-tap shader would in fact sample a 4x4 area, that is:
256 -> 64 -> 16 -> 4, and the last 4x4 texture can be sampled at the center.

Or when using a 3x3 tap:
864 -> 144 -> 24 -> 4
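Concretely, one of those 2x2-tap passes could look like this (my own rough sketch, not Ashaman73's code; gSrcTexelSize is assumed to hold 1 / source dimensions):

sampler SrcSampler;      // the previous (larger) luminance level
float2  gSrcTexelSize;   // (1 / source width, 1 / source height)

float4 Downsample4x4PS(float2 uv : TEXCOORD0) : COLOR0
{
    // With linear filtering on, each tap lands between four texels and
    // averages them, so four taps cover a full 4x4 block of the source.
    float sum = 0.0;
    sum += tex2D(SrcSampler, uv + gSrcTexelSize * float2(-1.0, -1.0)).r;
    sum += tex2D(SrcSampler, uv + gSrcTexelSize * float2( 1.0, -1.0)).r;
    sum += tex2D(SrcSampler, uv + gSrcTexelSize * float2(-1.0,  1.0)).r;
    sum += tex2D(SrcSampler, uv + gSrcTexelSize * float2( 1.0,  1.0)).r;
    return float4(sum * 0.25, 0.0, 0.0, 1.0);
}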


[quote]
Is there any faster way to get the average or log-luminance value other than downsampling? If not, do people generally start at 256x256 for a total of 9 downsamples? The HDR_Pipeline used 3x3 downsamples instead of 2x2; is that better?
[/quote]

In my engine, I used a Fast Fourier Transform to get a fast (O(log n) per pixel) bloom that is actually able to influence the whole screen and has a non-separable filter while still having the performance of a separable one. I didn't need to calculate the average luminance, because it's exactly the value at frequency 0 (the DC term) of the FFT-transformed image. So it's basically free in my case, and even more accurate than if it were calculated by downsampling.

If I needed to implement it without the FFT, though, I'd probably write 2 compute shaders. The first compute shader dispatches a thread group for every row, where every thread group has a thread for each pixel in that row. Every thread begins by reading its associated pixel and storing it in groupshared memory. Then every thread adds 2 values together and stores the result in groupshared memory again. You repeat that until only 1 value remains (only half the threads are actually adding values each step; be sure to release the unneeded warps). You then divide the resulting value by the number of elements and store it in a Texture1D. (They should definitely add some fast intermediate memory to DX12, so that you don't have to store buffers like this in global memory.) The second compute shader does basically the same thing, but is dispatched only once (and has as many threads as the image has rows).

This should perform way faster than downsampling an image multiple times, because of the multiple passes needed for downsampling and the resulting slow writes to global memory (as I said, we need intermediate memory for DX12).
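The first of those two shaders would look roughly like this (my own D3D11 HLSL sketch of the idea, not MJP's or NVidia's code; the names and the 256-pixel row width are assumptions):

// Pass 1: one thread group per row, one thread per pixel in that row.
Texture2D<float4>  SceneTex    : register(t0);
RWTexture1D<float> RowAverages : register(u0);

#define ROW_WIDTH 256
groupshared float sLum[ROW_WIDTH];

[numthreads(ROW_WIDTH, 1, 1)]
void ReduceRowCS(uint3 gid : SV_GroupID, uint3 gtid : SV_GroupThreadID)
{
    // Each thread loads one pixel of its row into groupshared memory.
    float3 c = SceneTex[uint2(gtid.x, gid.y)].rgb;
    sLum[gtid.x] = dot(c, float3(0.299, 0.587, 0.114));
    GroupMemoryBarrierWithGroupSync();

    // Pairwise reduction: the active thread count halves every step.
    for (uint stride = ROW_WIDTH / 2; stride > 0; stride >>= 1)
    {
        if (gtid.x < stride)
            sLum[gtid.x] += sLum[gtid.x + stride];
        GroupMemoryBarrierWithGroupSync();
    }

    // Thread 0 writes this row's average; a second dispatch then
    // reduces the RowAverages texture down to a single value.
    if (gtid.x == 0)
        RowAverages[gid.y] = sLum[0] / ROW_WIDTH;
}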

Update: Oh, I actually did the same thing MJP and NVidia did, without reading their work xD (MJP used a 2-dimensional approach though, but that shouldn't result in any difference at all. MJP's improved version might cause bank conflicts on NVidia hardware, though, because of the 128-bit strided access.)

allingm,

Thanks for the response.

So, my lighting calculation would have to change in the shader. Currently it's...

saturate(IN.SunLight + gAmbient) * pixCol;

So, would I have to get rid of my saturate and just combine ambient + light for proper HDR?

Thanks
Jeff.

Yes, saturate makes everything low dynamic range. Never clamp your lighting data if you want it to be high dynamic range.
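So your lighting line would just become something like:

// No saturate: let bright pixels go past 1.0 in the HDR render target.
(IN.SunLight + gAmbient) * pixCol;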

[quote]
saturate(IN.SunLight + gAmbient) * pixCol;
[/quote]
The saturate is a problem. To properly map the color into displayable range, you should start with the Reinhard tone mapper. You can experiment with other ones once you get it working.

Reinhard tone mapper:
y = x / (1 + x)

You can plot this in Wolfram Alpha to see that it does indeed map the range [0, infinity) to [0, 1).

Keep in mind this is the "classic" tone mapper, but it doesn't necessarily create the most pleasing results. However it should be plenty for learning.
http://filmicgames.com/archives/183
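As a post-process it's only a few lines. A minimal sketch (the sampler name is made up):

sampler HdrSampler;   // the A16B16G16R16F scene texture

float4 ToneMapPS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 hdr = tex2D(HdrSampler, uv).rgb;
    // Reinhard: maps [0, infinity) into [0, 1) per channel.
    float3 ldr = hdr / (1.0 + hdr);
    return float4(ldr, 1.0);
}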

Yes, the data inside your HDR texture can't sensibly be displayed directly on the screen -- if you do, and there are bright lights, then things will just look white.

Now that you've got your HDR data, you've got to tone-map it back down to "LDR" in order to display it. Allingm posted a very simple tone-mapping function above -- you can use that in a post-processing pass that reads your HDR texture as input, performs that function, and outputs a regular 888 RGB value.

I'm confused.

I have my HDR texture. I calculate my average luminance value (by downsampling and averaging).

What does my pixel shader use the luminance for?

y = x / (1 + x)

Does my pixel shader look like this (HLSL)?

hdrCol = tex2D(SampHdr, texCoords);
hdrCol.r = hdrCol.r / (1.0 + hdrCol.r); // ??
hdrCol.g = hdrCol.g / (1.0 + hdrCol.g); // ??
hdrCol.b = hdrCol.b / (1.0 + hdrCol.b); // ??

Where does the luminance go?

Currently I'm using the HDR_Pipeline's method (from the SDK):

final = hdrPixel;
fExposure = 1.0;
fGaussianScalar = 1.0;
// l is the sample from the 1x1 luminance texture
// (in the SDK sample, l.r holds the average luminance and l.g the maximum)
float Lp = (fExposure / l.r) * max( final.r, max( final.g, final.b ) );
float LmSqr = (l.g + fGaussianScalar * l.g) * (l.g + fGaussianScalar * l.g);
float toneScalar = ( Lp * ( 1.0f + ( Lp / LmSqr ) ) ) / ( 1.0f + Lp );
c = final * toneScalar;

Thanks
Jeff.
