Hi,
I just added some HDR rendering (tone mapping, bloom) to my game using the NVIDIA SDK 9.52 example "HDR deferred shading". While it works as expected and looks really good in certain situations, I wonder how to use these techniques in a real game. I can see it working well in any indoor game, which is convenient because my game will partially take place inside buildings with a huge variance between darkness and light. However, most of the game will take place outdoors. The HDR I use (I don't know if this is how HDR works in general) gives good results when there is a wide range of brightness in the scene, but in an outdoor environment that is not always the case. For example, I want my skydome to have a bloom effect. As long as I'm looking at my environment, this works (the sky has twice the diffuse lighting amount):
It looks quite nice, given that it's just a basic effect and the graphics are rather crappy. However, as soon as I look directly into the sky, the bloom completely disappears.
So now I'm a bit confused. I know it is correct that the blooming decreased (I'm using an HDR system that takes the last frame's tone map into consideration), but I expected it to at least stay at a certain level, brighter than normal. I want to achieve effects like the ones shown on this site:
http://www.boards.ie/vbulletin/showthread.php?p=63669605
Now my first question: is HDR the right thing to do such blooming with? I played with the light & tone-map values and achieved different effects, but not what I expected. Can I use my existing tone mapping to simulate such an effect, or should I use HDR only to simulate color values over 1.0f and add such a bloom as an additional post-process?
Second question: I noticed I had to use really high light values to get any noticeable bloom/HDR effect, like 16.0f+ for the diffuse component. Am I really supposed to use such bright light sources? For example, an ambient value of 2.0f gives scenes as dark as this:
Is this correct? I mean, I don't mind setting all my light sources to higher values, but I can't stop thinking that something is wrong. Is HDR supposed to have really bright light sources, or is there maybe something wrong in my code?
Third and finally, I noticed an ugly effect when using light sources that aren't that bright:
As you can see here, brighter colors from the texture stay bright, while darker values get really dark. It looks unappealing, and it only disappears when I set the light sources to some high value like 32.0f+. Once again, my question is: is this an effect I have to expect when using tone mapping/HDR? Can I do something against it, or do I have to use very bright light sources?
Could it be that I have a general misunderstanding of what HDR/tone mapping is used for? I didn't find many sources on the net, just one on MSDN explaining the tone-mapping calculations. Can somebody explain it to me? Thanks in advance!
HDR: practical use in a game
You sound a bit confused as to what HDR is. HDR is when you've got data that exceeds the range of your sensors. In games, that means that we're displaying a picture on an 8-bit display, but internally we represent the picture with more than 8 bits of precision.
Blooming, tone-mapping and dynamic adaptation are all separate concepts.
[quote]Now my first question: is HDR the right thing to do such blooming with? I played with the light & tone-map values and achieved different effects, but not what I expected. Can I use my existing tone mapping to simulate such an effect, or should I use HDR only to simulate color values over 1.0f and add such a bloom as an additional post-process?[/quote]
What's so special about 1.0f? Why is "1" unit of light considered "bright"?
If you're making a renderer that's just supposed to look okay, then pick any numbers you want that happen to give a decent result. Otherwise, consider what "units" your light values are being measured in -- i.e. 1.0 units of what?
[quote]Second question: I noticed I had to use really high light values to get any noticeable bloom/HDR effect, like 16.0f+ for the diffuse component. Am I really supposed to use such bright light sources? Is this correct? I mean, I don't mind setting all my light sources to higher values, but I can't stop thinking that something is wrong. Is HDR supposed to have really bright light sources, or is there maybe something wrong in my code?[/quote]
The sun appears about 10,000 times brighter than a light-bulb... so, if your lighting is supposed to be based on real physics, yes.
[quote]Could it be that I have a general misunderstanding of what HDR/tone mapping is used for? I didn't find many sources on the net, just one on MSDN explaining the tone-mapping calculations. Can somebody explain it to me? Thanks in advance![/quote]
Tonemapping converts linear brightness values into something approximating human vision.
To the human eye, a 200W light-bulb doesn't look twice as bright as a 100W light-bulb -- we respond in curves, not straight lines. Also, we can look at two objects that are 100,000 times different in brightness, and see them both clearly at the same time. Tonemapping is supposed to take this huge range of brightnesses and convert it into an appealing picture.
There's a LOT of ways to do tone-mapping (photographers were researching HDR/tone-mapping LONG before game programmers became interested in it), and none of them are "correct". Many will produce appealing images sometimes and dull images other times.
Here's a link that shows a few different types of tone-mappers:
http://filmicgames.com/archives/75
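To make the "curves, not straight lines" point concrete, here is the simplest of those operators -- basic Reinhard -- in plain Python (just a sketch to show the shape of the curve, not engine code; the function name is made up):

```python
# Basic Reinhard tone mapping: Ld = L / (1 + L).
# Any non-negative HDR luminance is compressed into [0, 1), and
# doubling the input luminance never doubles the output -- a curve.

def reinhard(lum: float) -> float:
    return lum / (1.0 + lum)

for lum in (0.5, 1.0, 2.0, 16.0, 100.0):
    print(lum, round(reinhard(lum), 4))
```

Notice that 16.0 and 100.0 both land near the top of the output range -- which is exactly why the choice of curve, not the raw light values, decides how "bright" things end up looking.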
Exposure can be tricky to get right with HDR. A lot of people just implement the basic Reinhard "log-average of luminance" approach and call it a day, but it's not always going to give you good results. At the very least you will probably need to tweak the key value for different scenes to make the result subjectively pleasing... for instance, in your case you might want to adjust it so that the sky doesn't just completely darken when you look at it. And in some cases you might get better results out of a completely different auto-exposure algorithm, or even just using manual exposure.
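As a sketch of what "log-average of luminance" and the key value mean (plain Python with made-up names; the key of 0.18 is the common middle-grey default from Reinhard's operator):

```python
import math

# Log-average (geometric mean) luminance of the scene, and the key
# scaling that maps the scene average to a chosen "middle grey".
# Tweaking the key per scene is the knob mentioned above.

def log_average(luminances, delta=1e-4):
    # delta avoids log(0) on pure black pixels
    return math.exp(sum(math.log(delta + l) for l in luminances) / len(luminances))

def scaled_luminance(lum, avg_lum, key=0.18):
    return key * lum / avg_lum

scene = [0.5, 2.0, 8.0, 32.0]   # hypothetical HDR pixel luminances
avg = log_average(scene)
print(round(avg, 3), [round(scaled_luminance(l, avg), 3) for l in scene])
```

A pixel exactly at the log-average luminance always maps to the key value, which is why raising the key brightens the whole image rather than particular parts of it.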
[quote]You sound a bit confused as to what HDR is. HDR is when you've got data that exceeds the range of your sensors. In games, that means that we're displaying a picture on an 8-bit display, but internally we represent the picture with more than 8 bits of precision.[/quote]
You are totally right, I was unaware of what HDR really does. I somehow thought that the blooming or over-brightness effects seen in many games like Oblivion were actually HDR. I read some papers about HDR rendering, and now I know better what it is used for.
[quote]What's so special about 1.0f? Why is "1" unit of light considered "bright"?[/quote]
I did consider 1.0f as bright, but I actually meant it to be "bright" in the context of ambient lighting, as an ambient of 1.0f would mean the texture colors appear 1:1 on the screen, at least in linear space. I was a bit confused that I am unable to create colors as bright as the original textures with HDR (at least not without light values that make the adaptation last about 10 seconds or so), so I thought something was going wrong.
[quote]If you're making a renderer that's just supposed to look okay, then pick any numbers you want that happen to give a decent result. Otherwise, consider what "units" your light values are being measured in -- i.e. 1.0 units of what?[/quote]
Well, the main problem is that I haven't gotten any really pleasant results so far. This is due to two issues. I already mentioned the first one: these ugly artifacts where a bright area is directly next to a much darker area. I'm afraid I don't know the English word for this effect, but I read in the German Wikipedia article about HDR rendering that it mostly happens due to low-precision storage of light values. However, I made sure that all of my render targets for storing lighting results are in the A16B16G16R16F format, so this shouldn't be the main reason. Any ideas here?
Another thing that perplexes me is the behavior of the tone mapping itself. In the NVIDIA SDK demo it works just fine: if I walk inside the house with the dark room and look out of it, the environment lit by the sun looks really bright. As soon as I walk outside, the adaptation makes it go "normal" again. In my application, however, the adaptation just does whatever it wants; even if there is only a small dark area on the screen, everything goes very bright. I noticed this effect is strongest when something dark is in the middle of the screen. I made a small video to show you (same settings as the NVIDIA SDK 9.52 HDR deferred shading demo: a 12.8f directional light value). Note: the graphics were NOT made by me, but ripped from the game "Cursed Mountain" by Deep Silver for testing purposes only (I really suck at creating 3D graphics and lack any 3D artists).
[media]
[/media]
First, let me sum up what HDR's main benefits for 3D rendering are, to see if I've got this straight: you can have very bright areas and very dark areas in the same image, and see both of them in great detail. Is this correct? If yes, then it seems like something is wrong with my HDR renderer, as 90% of the time I get only white and black colors. The dark side of the cube seems to totally mess up the whole adaptation/tone-map thing (look how long it takes for the adaptation to bring the sky back to "normal" colors after the black side of the cube has been in the middle of the screen for some time). Also, you should clearly see the color artifacts, as the tops of the mountains look like they glow white while the other parts are in very high color contrast to them (like at 0:24 in the top left corner).
Now what do you say? Is this an effect you have to expect using the basic log-average algorithm with bloom and adaptation, or is something seriously wrong? OK, I know it is unlikely that there will ever be a completely black cube in the middle of the screen in my game. But it confuses me that these roughly 2.5-5% of the screen have such a huge impact (note how, when I zoom in and the cube moves to the side, the scene goes way darker, even though there is now even more space covered by black). It might also be an effect of the coloring artifact; at least when I look down at the mountains it should give me better results without the artifact. Could it also be a fault of the textures used? They aren't really that detailed anyway, and I'm pretty sure they weren't designed with HDR in mind. Though I don't really think that textures have to be designed for HDR, I can't really know.
So, any comments/ideas on this? Playing around with the lighting values sometimes gives partial results that show me how great HDR could look without these flaws. Can somebody help me solve these issues?
How do I compensate for gamma? I've heard that you should take care with gamma values, but I don't really know how. And what exactly do you mean by "do your light calculations in linear space"? I didn't change anything in my linear lighting model when implementing HDR in my engine, so I don't think there is anything I should change. Or is there?
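From what I've read so far, "compensating for gamma" seems to mean converting between sRGB and linear values at the borders of the pipeline -- something like this (a Python sketch of the standard sRGB formulas, not my engine code):

```python
# Standard sRGB <-> linear conversions: read sRGB textures into linear
# space, do all lighting math there, and encode back to sRGB only at
# the very end, just before display.

def srgb_to_linear(c: float) -> float:
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c: float) -> float:
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

# A linear value of 0.5 is stored as roughly 0.735 in sRGB
print(round(linear_to_srgb(0.5), 3))
```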
[quote='MJP']Exposure can be tricky to get right with HDR. A lot of people just implement the basic Reinhard "log-average of luminance" approach and call it a day, but it's not always going to give you good results. At the very least you will probably need to tweak the key value for different scenes to make the result subjectively pleasing... for instance, in your case you might want to adjust it so that the sky doesn't just completely darken when you look at it. And in some cases you might get better results out of a completely different auto-exposure algorithm, or even just using manual exposure.[/quote]
The problem with adjusting the key value is that I don't have any control over how different parts of one scene are affected. For example, I can't really distinguish between the sky and the environment in the HDR rendering pass, so if I increase the key value then everything gets brighter, not just particular parts. Also, the adaptation takes much longer with higher key values (at least in my implementation); I can fix that, though. However, increasing the key value also increases the "peak brightness" and not only the overall level of brightness. So is there any way I could just increase the base level of brightness for the tone-mapped rendering?
Using HDR, by itself, is exactly the same as not using it. If you've got fake constant (non-attenuated) ambient lighting of 1.0 (and no other lighting), then an 8-bit render and a HDR render will look identical.
[quote]I did consider 1.0f as bright, but I actually meant it to be "bright" in the context of ambient lighting, as an ambient of 1.0f would mean the texture colors appear 1:1 on the screen, at least in linear space. I was a bit confused that I am unable to create colors as bright as the original textures with HDR (at least not without light values that make the adaptation last about 10 seconds or so), so I thought something was going wrong.[/quote]
The problem you're talking about here is with your adaptive tone-mapper. You can use a HDR renderer without using a tone-mapper, and you can use a tone-mapper without it being self-adaptive.
[quote]Another thing that perplexes me is the behavior of the tone mapping itself. In the NVIDIA SDK demo it works just fine: if I walk inside the house with the dark room and look out of it, the environment lit by the sun looks really bright. As soon as I walk outside, the adaptation makes it go "normal" again. In my application, however, the adaptation just does whatever it wants; even if there is only a small dark area on the screen, everything goes very bright.[/quote]
How does your adaptive algorithm work? Are there values that you can output for debugging? (e.g. if it works off of average luminance, can you output that value to see if it makes sense?)
[quote]First, let me sum up what HDR's main benefits for 3D rendering are, to see if I've got this straight: you can have very bright areas and very dark areas in the same image, and see both of them in great detail. Is this correct?[/quote]
No. That's a responsibility of your tone-mapper, not HDR. Also, only very few tone-mappers can deal with having very bright and very dark areas in the same image and preserve the details in both -- the algorithms that do cope with this are generally more complex and more computationally expensive.
[quote]Could it also be a fault of the textures used? They aren't really that detailed anyway, and I'm pretty sure they weren't designed with HDR in mind. Though I don't really think that textures have to be designed for HDR, I can't really know.[/quote]
No. Albedo textures describe what percentage of light should be absorbed/reflected. Because this is a percentage, it makes sense to store it as a fractional 0-to-1 value. This is the same in HDR and non-HDR renderers.
This also demonstrates why 0 and 1 have a specific meaning for albedo textures (full absorption and full reflectance). However, when it comes to light values, the number 1.0 is just as arbitrary and meaningless as 13.7 or 0.4 or 3.5325. There's nothing special about 1.0f when it comes to lighting.
[quote]So, any comments/ideas on this? Playing around with the lighting values sometimes gives partial results that show me how great HDR could look without these flaws. Can somebody help me solve these issues?[/quote]
My advice would be to simply not use an adaptive tone-mapper. Pick a robust algorithm that gives good results, and then tweak it by hand.
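For example, the non-adaptive version can be as simple as a hand-picked exposure multiplier in front of a fixed curve (a Python sketch with made-up names; the exposure constant is something you'd tweak per scene, not measure):

```python
# Manual exposure + fixed tone curve: no feedback from previous frames,
# so nothing can "adapt" in surprising ways. The exposure constant is a
# per-scene artistic choice.

def tonemap_manual(hdr_rgb, exposure=0.5):
    return tuple(c * exposure / (1.0 + c * exposure) for c in hdr_rgb)

print(tonemap_manual((16.0, 8.0, 4.0)))
```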
[quote]Using HDR, by itself, is exactly the same as not using it. If you've got fake constant (non-attenuated) ambient lighting of 1.0 (and no other lighting), then an 8-bit render and a HDR render will look identical.[/quote]
Ok, seems logical so far.
[quote]How does your adaptive algorithm work? Are there values that you can output for debugging? (e.g. if it works off of average luminance, can you output that value to see if it makes sense?)[/quote]
Yes, it works on average luminance. However, the output doesn't make much sense, because as soon as I turn the black side of the cube toward the camera, the luminance goes from >= 1.0f to something < 0.1f. It seems like the calculated average luminance only takes the mid-section of the screen into consideration.
[quote]No. That's a responsibility of your tone-mapper, not HDR. Also, only very few tone-mappers can deal with having very bright and very dark areas in the same image and preserve the details in both -- the algorithms that do cope with this are generally more complex and more computationally expensive.[/quote]
Well, this means that most information on the net about HDR is wrong, like this:
ftp://download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_HDR.pdf
On page 3 it says exactly what I asked whether it is true for HDR rendering. Do these people have no idea what they are talking about, or do they mix up tone mapping and HDR?
[quote]My advice would be to simply not use an adaptive tone-mapper. Pick a robust algorithm that gives good results, and then tweak it by hand.[/quote]
I followed your advice and searched for a different algorithm. After a long search I eventually found an interesting paper about the theoretical background of the algorithms used for HDR/tone mapping. I also found a mistake I had made when copying the code for my first tone-mapping algorithm and fixed it, but the results still looked unpleasing:
Here is the original without HDR or tone mapping:
Though there is a lot more detail in the tone-mapped image, it doesn't look eye-pleasing at all. There is no real contrast; everything looks like it has the same brightness. So that's when I decided I needed a new algorithm and looked back at the paper I found. I found the "modified Reinhard" algorithm most interesting, and implemented it. It looks awesome:
Sure, the light I used is a lot brighter, but overall it looks way better, in my opinion. Ignore the flame billboards around the screen, they are out of place. But other than that, what would you say? The overall detail is, from what I can tell, really good, for bright as well as darker areas (ah yes, I use bloom too, so that's why some parts are totally white). I made a lot more screenshots and compared both lighting models, and my new tone mapper appears to be much more pleasant to the eye. I can also modify my new algorithm more easily and adjust the final output:
float4 psToneMap(PS_INPUT_LIGHT i) : COLOR
{
    float lum = tex2D(LuminanceSampler, i.vTex0).x;                           // per-pixel luminance
    float3 blur = tex2D(BlurSampler, i.vTex0).xyz;                            // bloom texture
    float avg_loglum = exp(tex2D(AvgLuminanceSampler, float2(0.5, 0.5)).y)*5; // log-average scene luminance
    float3 lightTransport = tex2D(ImageSampler, i.vTex0).xyz;                 // HDR scene color
    float key = max(0, 1.5f-(1.5f/(avg_loglum*0.1f+1.0f)));                   // key derived from average luminance
    lightTransport += blur;
    float lum_scaled = (key*lum)/avg_loglum;
    // modified Reinhard curve with a white point of 0.5
    float3 finalColor = lightTransport*((lum_scaled*(1+(lum_scaled/0.5f)))/(1 + lum_scaled));
    return float4(finalColor, 1.0f);
}
That's the algorithm I'm using right now. It is still adaptive, but I can easily scale the amount of adaptation, so I won't get such strange artifacts anymore. And this is the algorithm used for the first picture, i.e. the one the NVIDIA SDK example uses:
float4 psToneMap(PS_INPUT_LIGHT i) : COLOR
{
    float lum = tex2D(LuminanceSampler, i.vTex0).x;                           // per-pixel luminance
    float3 blur = tex2D(BlurSampler, i.vTex0).xyz;                            // bloom texture
    float avg_loglum = exp(tex2D(AvgLuminanceSampler, float2(0.5, 0.5)).y);   // log-average scene luminance
    float3 lightTransport = tex2D(ImageSampler, i.vTex0).xyz;                 // HDR scene color
    lightTransport += blur;
    float lum_temp = (1.0f / (avg_loglum + 0.001)) * lum;                     // scaled luminance
    float ld = lum_temp / (lum_temp + 1);                                     // basic Reinhard: Ld = L / (L + 1)
    float3 finalColor = (lightTransport / lum) * ld;                          // rescale color by Ld / L
    return float4(finalColor, 1.0f);
}
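For reference, the two curves in the shaders above differ mainly in their tail behaviour. In plain arithmetic (a Python sketch mirroring the shader math, with `white` standing in for the hard-coded 0.5f):

```python
# Second shader: basic Reinhard, Ld = L / (1 + L) -- never reaches 1.0,
# so nothing ever burns out to pure white.
def reinhard_basic(l: float) -> float:
    return l / (1.0 + l)

# First shader: the "modified Reinhard" form L * (1 + L/white) / (1 + L),
# which lets luminances well above the white point exceed 1.0 and clip
# to white. (The canonical paper version divides by white squared.)
def reinhard_modified(l: float, white: float = 0.5) -> float:
    return l * (1.0 + l / white) / (1.0 + l)

for l in (0.25, 0.5, 1.0, 4.0):
    print(l, round(reinhard_basic(l), 3), round(reinhard_modified(l), 3))
```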
Well, what do you say? Do you think the new tone mapper looks as good as I think it does? Here is one final comparison screenshot, with slightly less bright light:
Left side without HDR/tonemapping, right side with HDR/tonemapping. What do you say? Any comments/feedback would be really appreciated.
[quote]Well, this means that most information on the net about HDR is wrong, like this: ftp://download.nvidi...Leagues_HDR.pdf
On page 3 it says exactly what I asked whether it is true for HDR rendering. Do these people have no idea what they are talking about, or do they mix up tone mapping and HDR?[/quote]
Note that it's underneath the heading "In lay terms" - meaning they're trying to get the general point across without getting into technical details.
In lay terms, I'd say the same - that the magic buzzword "HDR" is used to achieve all of this. I might even say that "HDR" is responsible for the realistic bloom implementation when being overly general to quickly get the point across.
Also, while they say things can be really bright/dark and still be seen in detail, they don't necessarily say that you can see details in really bright and really dark things in the same image at the same time. Depending on the tone-mapping algorithm used to convert from HDR to 8-bit, the truth of the "at the same time" part will vary.
[quote]Yes, it works on average luminance. However, the output doesn't make much sense, because as soon as I turn the black side of the cube toward the camera, the luminance goes from >= 1.0f to something < 0.1f. It seems like the calculated average luminance only takes the mid-section of the screen into consideration.[/quote]
How are you getting the average luminance? Are you using the "repeatedly downscale the rendertarget by half until you've got a 1x1 version" technique? To debug this, you can view all the downsample steps in PIX, or render them to the screen as 2D overlays to ensure it's working correctly.
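The downscale chain itself is easy to sanity-check on the CPU. A Python sketch of the idea for a power-of-two image (each pass averages 2x2 blocks; function name is made up):

```python
# Repeatedly halve the image, replacing each 2x2 block with its mean.
# The final 1x1 value must equal the mean of the whole image -- if it
# doesn't, one of the downsample passes is broken.

def downsample_half(img):
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

img = [[float(y * 4 + x + 1) for x in range(4)] for y in range(4)]  # values 1..16
while len(img) > 1 or len(img[0]) > 1:
    img = downsample_half(img)
print(img[0][0])  # mean of 1..16 = 8.5
```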
[quote]Note that it's underneath the heading "In lay terms" - meaning they're trying to get the general point across without getting into technical details.
In lay terms, I'd say the same - that the magic buzzword "HDR" is used to achieve all of this. I might even say that "HDR" is responsible for the realistic bloom implementation when being overly general to quickly get the point across.[/quote]
Oh, I didn't notice the "in lay terms". As the Wikipedia article makes the same statement in its general description, I took it as a given. OK, now I know that isn't right.
[quote]Also, while they say things can be really bright/dark and still be seen in detail, they don't necessarily say that you can see details in really bright and really dark things in the same image at the same time. Depending on the tone-mapping algorithm used to convert from HDR to 8-bit, the truth of the "at the same time" part will vary.[/quote]
Ah, another thing I misunderstood. I combined both statements into one. I also had your statement in mind:
[quote]Also, we can look at two objects that are 100,000 times different in brightness, and see them both clearly at the same time.[/quote]
So this is true for our eyes, and would be an aim for HDR rendering, but isn't always achieved. Correct?
[quote]How are you getting the average luminance? Are you using the "repeatedly downscale the rendertarget by half until you've got a 1x1 version" technique? To debug this, you can view all the downsample steps in PIX, or render them to the screen as 2D overlays to ensure it's working correctly.[/quote]
Yes, I am using the downscale technique you mentioned. As I can't use PIX (that old crashing thing), I tried NVIDIA PerfHUD, but it gives me some weird output for this part of the rendering. The textures and render targets used seem to be set at random; for example, first a 960x600 render target is set, then in the next step it uses the original 1920x1200 scene, afterwards it jumps to the 480x300 render target, and so on. However, the final average luminance seemed to be correct, and from what I was able to figure out from the strange PerfHUD output, I found the actual problem: the sampler state used for downsampling was set to a linear filter. I tried setting the filter to point, and guess what: it works perfectly now. It seems bilinear filtering wasn't working for getting the average luminance. Any reason why the NVIDIA SDK had it set?
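For what it's worth, the reason a downsample shader can use linear filtering at all is that a single bilinear tap placed exactly on the corner shared by four texels returns their average in one fetch -- but only with the correct half-texel offsets. A sketch of the arithmetic (made-up function names):

```python
# A bilinear fetch at the exact corner between four texels weights each
# by 0.25, i.e. it returns their mean -- one tap averages a 2x2 block.
def bilinear_at_corner(a, b, c, d):
    return 0.25 * (a + b + c + d)

# A point fetch just returns the nearest single texel, so a one-tap
# point-filtered downsample keeps only a quarter of the data.
def point_at_nearest(a, b, c, d):
    return a

print(bilinear_at_corner(1.0, 3.0, 5.0, 7.0))  # 4.0, the true mean
```

So if the sample coordinates are off by half a texel, the linear version averages the wrong texels and the result drifts; either way, it's worth double-checking the texture coordinate offsets in the downsample pass.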
So, now that this is working, I still have one doubt about my algorithm.
uniform float white;                                                          // white point, set from the application

float4 psToneMap(PS_INPUT_LIGHT i) : COLOR
{
    float lum = tex2D(LuminanceSampler, i.vTex0).x;                           // per-pixel luminance
    float3 blur = tex2D(BlurSampler, i.vTex0).xyz;                            // bloom texture
    float avg_loglum = exp(tex2D(AvgLuminanceSampler, float2(0.5, 0.5)).y)*5; // grab the single averaged texel
    float3 lightTransport = tex2D(ImageSampler, i.vTex0).xyz;                 // HDR scene color
    float key = max(0, 1.5f-(1.5f/(avg_loglum*1.0f+1.0f)));                   // key derived from average luminance
    lightTransport += blur;
    float lum_scaled = (key*lum)/avg_loglum;
    // modified Reinhard curve with adjustable white point
    float3 finalColor = lightTransport*((lum_scaled*(1+(lum_scaled/white)))/(1 + lum_scaled));
    return float4(finalColor, 1.0f);
}
This is the final version of my tone-mapping algorithm. My concern is now: what am I supposed to multiply the compressed and scaled luminance values with? Am I supposed to multiply them with the luminance-lit scene, or with the pure texture colors of the objects in the scene?

With the first option, I get the (subjectively) more eye-pleasing picture. However, a lot of values (especially for high light brightnesses) exceed the 0.0f-1.0f range in the final scene. The compressed luminance itself is correct; none of its values go above 1.0f, so multiplying it with the texture colors only gives values above 1.0f for texture colors above roughly 0.75f. However, with that solution I can't really get the image to go brighter by choosing a brighter light color (100.0f gives the same result as 10000.0f after adaptation). Multiplying with the lit scene gives values over 1.0f, but the light has a more direct impact.

Which of these two solutions is right? I just noticed that no paper about HDR tells you what to do with the compressed luminance values. Is it that straightforward? I really can't decide which of these two ways is the right one to go. I made a little .gif for comparison: the first picture is the compressed luminance, the second is the luminance multiplied with the lit scene, the third is the luminance multiplied with the texture colors only.
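For comparison, the convention most Reinhard-style descriptions implicitly use (and what the earlier basic-Reinhard shader did with its `(lightTransport / lum) * ld` line) is: tone-map the luminance only, then rescale the HDR colour by the ratio Ld/L, which preserves hue. A Python sketch with made-up names:

```python
# Tone-map luminance only, then scale the RGB colour by the ratio of
# display luminance to scene luminance. Hue (the ratios between the
# channels) is preserved; only the overall brightness is compressed.

def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights

def tonemap_color(rgb, avg_lum, key=0.18):
    l = luminance(*rgb)
    ls = key * l / avg_lum            # scaled luminance
    ld = ls / (1.0 + ls)              # basic Reinhard on luminance
    scale = ld / l if l > 0.0 else 0.0
    return tuple(c * scale for c in rgb)

out = tonemap_color((8.0, 4.0, 2.0), avg_lum=2.0)
print(tuple(round(c, 3) for c in out))
```

Note the red:green ratio of the input (2:1) survives the mapping unchanged, while the output luminance stays below 1.0.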
What you've got here is a failure to debug. I think it's likely you don't know enough about what you're implementing to properly debug the system. If you don't know what the values represent, there's no way you can analyze them for consistency and accuracy.
I suggest you go through every line of your code and don't go on until you know exactly what that line is doing and why. Once you do that, you'll be able to figure out exactly why you're not getting the results you expect.
This topic is closed to new replies.