I am trying to figure out HDR so I can implement it in my engine. Let me try to explain what I know; please correct any mistakes and answer the questions along the way.
So let me see if I get the pipeline right...
Normally I would render a scene to the backbuffer, which is in D3DFMT_X8R8G8B8 format. The texture sampler used to color meshes returns ARGB float values between 0.0f and 1.0f, since colors in HLSL range from 0 to 1 rather than from 0 to 255. So if the sampled texture is in D3DFMT_A16B16G16R16 format, the 32-bit floats in the shader will be able to hold all of that information. Each 32-bit channel float must then be truncated down to an 8-bit channel to fit the backbuffer, which happens after the fragment shader?
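A quick way to see what that truncation costs is to quantize the same 0-to-1 value to 8 and to 16 bits. This is plain Python, not D3D; `quantize` is a hypothetical helper that just rounds to the nearest representable level of an unsigned normalized integer:

```python
def quantize(value, bits):
    """Round a float in [0, 1] to the nearest representable level
    of an unsigned normalized integer with the given bit depth."""
    levels = (1 << bits) - 1          # 255 for 8-bit, 65535 for 16-bit
    return round(value * levels) / levels

v = 0.50001
print(quantize(v, 8))    # collapses toward 128/255 -> visible banding
print(quantize(v, 16))   # keeps far more of the difference
```

An 8-bit channel only has 256 levels between 0 and 1, which is where banding in smooth gradients comes from; a 16-bit channel has 65536.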
Now, for HDR (with deferred shading), I want to create a render-target texture in D3DFMT_A16B16G16R16 format and render the scene to it. My shader still outputs a color between 0 and 1, but the precision is a 32-bit float per channel. That 32-bit float output must then be truncated down to fit a 16-bit channel in the texture render target?
The pipeline is:
Render Scene to HDR texture -> Render quad with the HDR texture to backbuffer.
The colors are still between 0 and 1 in both situations, just at two different precisions, making room for smoother color transitions. So even with 100 light sources the intensity will never go beyond the value 1.0f?
Now, to get those neat effects I need to follow this pipeline:
Render Scene to HDR -> Tone-mapping HDR texture -> Render quad with the HDR texture.
By tone-mapping we take the HDR texture and scale values down using the exponential function
float4 exposed = 1.0 - exp( -(unexposed * K ) );
Where unexposed is the raw channel value between 0 and 1, and exposed is also a value between 0 and 1? From reading guides I thought that the unexposed value from the HDR texture would be between 0 and infinity... K is the luminance adaptation value. K is obtained from the previous frame by making a second HDR render target, outputting Log2(exposed) in the fragment shader, and mipmapping that down to a 1x1 texture. This 1x1 texture holds RGB values which are weighted-averaged and exponentiated to get K.
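The math above can be sketched in plain Python (the shader does the same per pixel). Note this is an illustrative sketch, not the exact scheme from any particular guide: `log_average_luminance` is what the log/mipmap chain effectively computes, and taking K proportional to its reciprocal is just one common way to make the adaptation scale with scene brightness. With a floating-point render target the raw values can exceed 1.0, which is exactly what the exposure curve handles:

```python
import math

def expose(unexposed, k):
    """Exponential exposure curve: maps [0, inf) into [0, 1)."""
    return 1.0 - math.exp(-unexposed * k)

def log_average_luminance(luminances, eps=1e-4):
    """Exp of the mean of logs -- what the Log2/mipmap chain computes.
    eps guards against log(0) on pure black pixels."""
    n = len(luminances)
    return math.exp(sum(math.log(eps + l) for l in luminances) / n)

pixels = [0.1, 0.5, 2.0, 8.0]            # HDR luminances, some above 1
k = 1.0 / log_average_luminance(pixels)  # brighter scene -> smaller K
print([round(expose(p, k), 3) for p in pixels])
```

However bright the input gets, the exposed result stays below 1, so it fits the backbuffer without clamping.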
Can K be obtained by having a global variable, instead of another render target, that each fragment adds its Log2(exposed) value to? This value would then be divided by the number of pixels in the application?
The Render Scene to HDR -> Tone-mapping HDR texture stages can be merged into one by just adding the above code at the end of the pixel shader.
The pipeline is now:
Render Scene to HDR with Tone-mapped values -> Render quad with the HDR texture.
This pipeline will now bring out dark and light areas in the scene, instead of having the scene over- or under-exposed.
To get that Bloom effect...
This is simply done by taking a copy of the current scene, blurring it, and adding the blurred version back to the original. To make this fast, you usually scale the scene down to a texture that is, for example, 1/4 of the original size, and then blur that. Another thing is that you only want the brightest pixels to glow, not the entire scene; therefore, when scaling down the scene, you usually also subtract a factor from the original pixels, for example 1.0, but you can use whatever looks good. The smaller this factor is, the more of your scene will glow, and vice versa.

So I guess I need to make another HDR render target and output a darkened color to it, mipmap it down, and perform a horizontal Gaussian blur pass and a vertical Gaussian blur pass. Then combine the two Gaussian passes in a third/fourth pass to produce the final scene on the backbuffer.
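The bright-pass and one blur pass can be sketched on a 1-D row of pixels (plain Python; in the engine these are shader passes over the quarter-size render target, and the kernel weights here are just a minimal 3-tap example):

```python
def bright_pass(pixels, threshold=1.0):
    """Keep only what exceeds the threshold; the rest goes black.
    A smaller threshold makes more of the scene glow."""
    return [max(p - threshold, 0.0) for p in pixels]

def blur_1d(pixels, kernel=(0.25, 0.5, 0.25)):
    """One separable Gaussian pass; run it along the rows for the
    horizontal pass, then along the columns for the vertical pass."""
    r = len(kernel) // 2
    out = []
    for i in range(len(pixels)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(pixels) - 1)  # clamp edges
            acc += w * pixels[idx]
        out.append(acc)
    return out

row = [0.2, 0.4, 3.0, 0.4, 0.2]          # one bright HDR pixel
print(blur_1d(bright_pass(row)))          # -> [0.0, 0.5, 1.0, 0.5, 0.0]
```

Because the Gaussian is separable, blurring rows and then columns gives the same result as one full 2-D blur at a fraction of the cost.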
The pipelines are now:
RenderTarget0 HDR (Tone-mapped with K from previous frame) -> HDR scene
RenderTarget1 HDR (Luminance/Log2/K) -> mipmap down to 1x1 -> exp(dot( color.rgb, float3( 0.333, 0.333, 0.333 ) )) -> K
RenderTarget2 HDR (mipmapped for bloom) -> Mipmap 1/4 size with darkened colors -> Vertical blur
RenderTarget3 HDR (mipmapped for bloom) -> Mipmap 1/4 size with darkened colors -> Horizontal blur
BackBuffer = HDR scene + BilinearUpscale(Vertical blur + Horizontal blur)
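The final composite step can be sketched the same way (plain Python stand-in for the full-screen quad pass; `nearest_upscale` is a hypothetical helper standing in for the GPU's bilinear upscale, using nearest-neighbour to keep the sketch short):

```python
def nearest_upscale(pixels, factor):
    """Stand-in for the bilinear upscale of the quarter-size
    bloom texture (nearest-neighbour, for brevity)."""
    return [p for p in pixels for _ in range(factor)]

def composite(scene, bloom_small, factor=4):
    bloom = nearest_upscale(bloom_small, factor)
    # Additive blend, clamped to the 0..1 backbuffer range.
    return [min(s + b, 1.0) for s, b in zip(scene, bloom)]

scene = [0.5] * 8          # tone-mapped scene, already in 0..1
bloom_small = [0.1, 0.8]   # quarter-size blurred bright-pass
print(composite(scene, bloom_small))
```

The clamp only matters because the scene is already tone-mapped into 0..1; the bloom is the one term that can push a pixel over the top.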
This sums up to 3 or 4 passes?
Woah! Am I thinking correctly? Please let me know what I have done wrong and possibly suggest better pipelines.