HDR lighting + bloom effect

Started by cyberlorddan. 18 comments, last by simonjacoby 14 years, 11 months ago
Hi! I want to implement HDR lighting in my engine, but I just can't find a good tutorial for Managed DirectX that shows how to do it. Basically, I understand what I have to do:

1. Render the color data to a texture (color can exceed 1.0f).
2. Resize the texture (make it smaller).
3. Set the areas that are below a luminance level to black (0).
4. Blur the texture.
5. Scale the texture back to its original size.
6. Render the original texture combined with the resulting texture from step 5.
7. Get the maximum luminance level of the texture resulting from step 6.
8. Divide the texture colors by the maximum luminance.

1st question: are these the right steps to perform HDR rendering?

2nd question(s): if they are, how do I render the color data to a texture? How do I resize the texture? How do I get the luminance of a pixel? How do I blur the texture? How do I combine two textures? How do I get the maximum luminance level of a texture?

3rd question: can all of those operations be described in one .fx file? (I mean I don't want to do any of these operations in my application code; I don't want to use the CPU.)

thx in advance.
You can start with the examples that are in the DirectX SDK. They are a good starting point for doing your own stuff.
Oops, I just saw that you are specifically looking for Managed DirectX ... I think it would be good to check out the C++ stuff first.
What you say there is a good starting point. You will have to use render targets with higher precision than 8 bits per channel, and there are lots of small challenges attached to each of the points you mention in 1-8.
The level of detail you go into will determine how good your HDR pipeline is in the end.
Gamma correction will also be a major topic to look into.
First of all... why MDX? It's a dead project that's no longer included in the SDK, and will never be updated again. These days if you want to do DirectX with a managed language your only real choices are SlimDX and XNA. The former is a (very good) wrapper of DX9/DX10/DX10.1, while the latter is a wrapper of DX9 combined with other game-related framework utilities (it's also compatible with the Xbox 360 and Zune).

Secondly, I agree with wolf that the SDK samples are a good place to start. In fact there's actually a managed port of the HDR Pipeline sample that happens to be the first hit when you search on Google for "HDR sample MDX".

You might also want to check out this; it's a good overview.
EDIT: sorry, I was still writing when you guys answered :)

Hi,

you've got some of the concepts mixed up. From your description, it sounds like you're trying to do three things:

1. Render HDR
2. Perform automatic luminance adaption during the tone-map pass
3. Add a bloom effect

Here's a brief explanation of how you do each step, and why:

HDR rendering: this is the source of a lot of confusion, mainly because it's one of those buzzwords that gets thrown around a lot. Here's what it means in practical terms:

When you draw stuff "regularly", you usually do that to a color buffer where each channel is eight bits (for example RGBA8). This is fine for representing colours, but when you're rendering 3D stuff you really need more precision, because your geometry will be lit and shaded in various ways, which can cause pixels to have very high or very low brightness.

The way to fix this is simply to render to a buffer that has higher precision in each color channel. That's it. Instead of just using 8 bits, use more. One format that is easy to use and has good enough precision is the half-float format, in D3D lingo D3DFMT_A16B16G16R16F. Because of limitations of the GPU, you usually can't set the back buffer to a high-precision format. So instead, you create a texture with this format, render to it, and then copy the result to the back buffer so it can be shown on your screen.

So, all you have to do is create a texture with this new format (for example), and bind it as the render target instead of the default back buffer. Let's call this texture the HDR render texture. When you have created it and set it as the render target, just draw as usual. When you're done rendering, copy the pixels in the HDR texture to the old back buffer to show it. The copy is usually done by drawing a full-screen quad textured with the HDR render texture over the back buffer. When you've done this: voilà! Your very first HDR rendering is done :)
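The copy pass itself can be tiny. A minimal sketch of the pixel shader, assuming the HDR render texture is bound to a sampler (hdrSampler and iTc0 are just placeholder names, not anything official):

sampler2D hdrSampler : register(s0); // placeholder: the HDR render texture, bound by the app

float4 CopyPS( float2 iTc0 : TEXCOORD0 ) : COLOR0
{
    // just move the pixels across; tone mapping can slot in here later
    return tex2D( hdrSampler, iTc0 );
}

The vertex shader just passes the quad's position and texture coordinates straight through.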

If you've done this correctly, the first thing you will notice is that there has been no improvement at all to your regular rendering ;) This is because we haven't done any of the cool stuff that higher precision enables us to do. Some of the most common things people do are bloom, exposure, vignetting and luminance adaption (exposure, vignetting and luminance adaption are usually called tone-mapping when used together).

Here's what they are, and how you do them.

Exposure: there's a great article written by Hugo Elias that explains it much better than I could here: http://freespace.virgin.net/hugo.elias/graphics/x_posure.htm
In practice, that article boils down to a single line of code at the end of your shader:

float4 exposed = 1.0 - exp( -( vignette * unexposed * exposure ) ); // exp(x) = e^x, the 2.71828... base from the article

where 'unexposed' is the "raw" pixel value from your HDR texture, 'vignette' is explained below, and 'exposure' is the constant K in Hugo Elias' article. In my code it's simply declared as:

const float exposure = 2.0;

...because 2.0 makes my scene look nice. You may have to use a different value that looks good for you, if you decide to implement exposure. If you want it a bit more robust, you can make this happen automatically, as described under 'luminance adaption' below. Also, know that there are several ways of performing the exposure, with different formulas, which result in different images. The Hugo Elias one is an easy way to get started, though.

Vignetting: because a lens in a camera has a curved shape, it lets in less light at the edges, so many photos and films (especially from cheap cameras) have noticeably darkened edges. See example here: http://mccollister.info/vignette70.jpg. This effect is called vignetting. It can be simulated with two lines of code:

float2 vtc = iTc0 - 0.5;                                  // center the coordinates around (0, 0)
float vignette = pow( 1 - ( dot( vtc, vtc ) * 1.0 ), 2.0 ); // the 1.0 is a strength factor you can tweak

...where iTc0 contains the texture coordinates of the full-screen quad, ranging from 0 to 1. The result is a factor that is 1.0 at the center of the screen and falls off toward the edges.
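Putting the exposure and vignette snippets together, a sketch of a complete tone-map pixel shader could look like this (again, hdrSampler and iTc0 are placeholder names, and 2.0 is just my value):

sampler2D hdrSampler : register(s0); // placeholder: the HDR render texture

static const float exposure = 2.0;   // tweak until your scene looks good

float4 ToneMapPS( float2 iTc0 : TEXCOORD0 ) : COLOR0
{
    float4 unexposed = tex2D( hdrSampler, iTc0 );        // raw HDR pixel

    float2 vtc = iTc0 - 0.5;                             // center coords around (0, 0)
    float vignette = pow( 1.0 - dot( vtc, vtc ), 2.0 );  // darken toward the edges

    // exposure as in the Hugo Elias article: maps [0, inf) into [0, 1)
    return 1.0 - exp( -( vignette * unexposed * exposure ) );
}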

Luminance adaption: this is part of the exposure, but can be done separately. In Hugo's code the constant K (in my code, the variable 'exposure') is fixed, meaning that you have to tweak it manually for a scene to look good. If a level varies a lot in brightness (for example, you are standing in a dark room and then walk outside into a sunny day), no single value of K will work well for both scenes (the sunny outside may be 10,000 times or more brighter than the dark inside). Instead, you need to measure how bright the scene is so you can adjust K accordingly.

The easiest way to do this is to take the average of all pixels in the HDR texture. A fast way to do that is to generate mip-maps of the HDR texture, all the way down to a 1-pixel texture. That final one-pixel texture contains the average of all the pixels above it, which is the same as the average scene luminance. Use this value as K when doing the exposure: simply use the 1-pixel texture as input to the exposure pass, instead of the hardcoded K (or 'exposure', as it's called in my code example). You will need to tweak it to look good and adapt the way you want, but when it's done your renderer can handle all kinds of brightnesses, which is very cool :)
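As a sketch, assuming you bind that 1-pixel mip as a second texture (avgLumSampler is a made-up name, and the 0.5 midpoint is just a tweakable, not a magic number):

sampler2D hdrSampler    : register(s0); // placeholder names, bound from the app
sampler2D avgLumSampler : register(s1); // the 1x1 mip of the HDR texture

float4 AdaptedToneMapPS( float2 iTc0 : TEXCOORD0 ) : COLOR0
{
    float4 unexposed = tex2D( hdrSampler, iTc0 );

    // the texture is 1x1, so any texcoord returns the average scene color
    float avgLum = dot( tex2D( avgLumSampler, float2( 0.5, 0.5 ) ).rgb,
                        float3( 0.333, 0.333, 0.333 ) );

    // derive K from the average: bright scene -> small K, dark scene -> large K.
    // the max() just avoids dividing by zero in a pitch-black scene.
    float exposure = 0.5 / max( avgLum, 0.001 );

    return 1.0 - exp( -( unexposed * exposure ) );
}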

Finally, there's the blooming: I'm sure you already know what this is, simply making bright parts of the scene glow a bit. It's done by taking a copy of the current scene, blurring it, and adding the blurred version back to the original. To make this fast, you usually scale the scene down to a texture that is for example 1/4 of the original size, and blur that. Also, you usually only want the brightest pixels to glow, not the entire scene; therefore, when scaling down the scene, you also subtract a factor from the original pixels, for example 1.0, but you can use whatever looks good. The smaller this factor is, the more of your scene will glow, and vice versa.
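The downscale-and-subtract step might look something like this sketch, run while rendering into the 1/4-size bloom texture (hdrSampler is a placeholder name again):

sampler2D hdrSampler : register(s0); // placeholder: the full-size HDR scene

static const float threshold = 1.0; // the subtracted factor; smaller = more glow

float4 BrightPassPS( float2 iTc0 : TEXCOORD0 ) : COLOR0
{
    // subtract the threshold so only pixels brighter than it survive
    return max( tex2D( hdrSampler, iTc0 ) - threshold, 0.0 );
}

After this you blur the small texture and add it back to the scene.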

Whoa, long post :)

While this probably seems pretty complicated, depending on the look you want and the type of game you are creating, you can implement all of it or just some of it. Modern FPS and racing games implement most of the above, but if you just want to make a simple space shooter with some nice glow effects, all you have to implement is the render-to-high-precision-texture part and the bloom part.

For starters, you should probably just try that, and then add the other effects as you get more comfortable.

So, to answer your questions:

1. It depends, see above :)
2. You render color data to a texture by creating a texture with usage D3DUSAGE_RENDERTARGET and a high precision pixel format, and then setting it with device->SetRenderTarget( 0, m_hdr_rt_tex );
3. You resize by creating more mip levels for your texture, and rendering to them. Don't forget to set the viewport to match the mip level size. (See the sketch after this list.)
4. One simple way of getting the luminance is to average the color channels together. This can be done with a dot product, like so:
float lum = dot( color.rgb, float3( 0.333, 0.333, 0.333 ) );
Some people like to weight the channels differently, with more on green and a lot less on blue, but in practice nobody ever notices the difference unless you point it out ;) Feel free to experiment :)
5. You blur a texture by averaging several nearby samples together (see the sketch after this list).
6. It depends, but when adding bloom you usually just add the color values.
7. Described above.
8 (3rd? :)). Yes, you can have everything in one .fx file; create each pass as a technique (downsample_technique, bloom_combine_technique, etc.).
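For 3 and 5, the same shader can do double duty: averaging four neighbouring samples both downsamples and blurs. A sketch of a simple box filter (srcSampler and texelSize are made-up names; texelSize has to be set from your app to 1 / the source dimensions):

sampler2D srcSampler : register(s0); // placeholder: the texture being downsampled/blurred

float2 texelSize; // set from the app: float2( 1.0 / width, 1.0 / height ) of the source

float4 BoxBlurPS( float2 iTc0 : TEXCOORD0 ) : COLOR0
{
    // average four neighbouring samples; run repeatedly for a stronger blur
    float4 sum = tex2D( srcSampler, iTc0 + texelSize * float2( -0.5, -0.5 ) )
               + tex2D( srcSampler, iTc0 + texelSize * float2(  0.5, -0.5 ) )
               + tex2D( srcSampler, iTc0 + texelSize * float2( -0.5,  0.5 ) )
               + tex2D( srcSampler, iTc0 + texelSize * float2(  0.5,  0.5 ) );
    return sum * 0.25;
}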

Best of luck!

Simon
Superb post, Simon.
wow.... thx for the fast answers.

OK. So, to answer your... answers :) ... I'm working on an RTS game, and right now I'm doing the map editor. The reason I'm using MDX is that I don't know how to add buttons, panels, and so on in native C++. (The game application is written in native C++; I've done the start screen and now I have to do the level editor.)

I've looked at the samples that come with the SDK, but I simply don't understand them. I started studying HLSL two days ago (I'm following the www.riemers.net tutorial).

thx for the 'long answer' :). It was the kind of answer I was expecting to get.
But I don't know how to do some of the things you described... 1st: you said that I have to copy the HDR texture to the back buffer. Isn't there a simpler way of doing this? Something like device.backbuffer = hdrTexture? From what you wrote, I have to create 4 vertices and render them with the HDR texture, right?
2nd: I don't know how to scale the texture (create mipmaps).
3rd: I didn't understand how to blur the texture :(

I think that's all :)

Quote: Original post by cyberlorddan

OK. So, to answer your... answers :) ... I'm working on an RTS game, and right now I'm doing the map editor. The reason I'm using MDX is that I don't know how to add buttons, panels, and so on in native C++. (The game application is written in native C++; I've done the start screen and now I have to do the level editor.)



Well, I really don't think you want to maintain one version of your renderer in native DX and another in MDX. That's a disaster waiting to happen.

What you CAN do is generate managed wrappers for your native C++ classes using C++/CLI. This will let you write your editor in C# and use the same native rendering back-end. However, I'll warn you that although it starts out somewhat simple, maintaining your wrappers can turn into a very non-trivial task. If you're not working on a bigger team, it's much better to just have everything written in managed code.

Another option is to use a C++ toolkit like Qt or wxWidgets for the UI. Those are generally much easier to work with than the native Windows API.
1. What you generally do, once you have all of your intermediate images and all that's left is to merge them, is get the back buffer from the device, set it as the render target, and then draw a full-screen quad that combines all of those images into your one final image.
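If it helps, that merge can be a one-line pixel shader; something like this sketch (sceneSampler and bloomSampler are made-up names for whatever you bind):

sampler2D sceneSampler : register(s0); // full-resolution scene (placeholder name)
sampler2D bloomSampler : register(s1); // blurred, downsampled bright-pass result

float4 CombinePS( float2 iTc0 : TEXCOORD0 ) : COLOR0
{
    // additive blend; bilinear filtering upsamples the small bloom texture for free
    return tex2D( sceneSampler, iTc0 ) + tex2D( bloomSampler, iTc0 );
}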

2 and 3 can be answered if you take the time to look at some code from the SDK or another relevant source. For examples of how to do both, take a look at this article: http://www.gamedev.net/columns/hardcore/hdrrendering/

There is also a more in-depth and complex description of the HDR process, based on the DirectX 10 API, here on the wiki.
thx for the info. I'm slowly implementing HDR in my engine (I've got everything I need to do this... thx for the links :)).
One more question... You said that MDX is a dead project... Does that mean it won't be updated anymore, or only that the documentation won't be updated? I'm asking because I saw that tessellation was implemented in DirectX 11 only, yet while I was modifying my MDX engine I saw somewhere (I can't remember where) a struct or enumerator or something (it showed up in the IntelliSense scroll list) named Tesselation... :| It might be a stupid question, but I really don't know much about MDX :(

