r691175002

Render Target Formats for HDR Deferred Shading


Hi, I'm playing around with the idea of using deferred shading with HDR on newer hardware (say Shader Model 3.0+, NVIDIA 9000 series and up) and was wondering what the best way to go about it would be. How do 64-bit floating-point textures work out, performance- and quality-wise, on newer hardware? Unfortunately, MRT restrictions make it impossible to mix the HalfVector4 format with 32-bit formats, so I will either need to split the color information into two HalfVector2 textures, do a separate pass, or use the LogLuv method of fitting it all into 32 bits. Which would be the best option?

[Edited by - r691175002 on October 20, 2008 10:20:17 PM]

Well, you'll want something that you can additively blend without problems. LogLuv probably isn't such a great choice for that.
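To see why log-encoded formats and additive blending don't mix, here's a small CPU-side sketch (plain Python standing in for the blend hardware, not actual shader code): fixed-function additive blending adds the *encoded* channel values, and adding log-luminances amounts to multiplying the luminances rather than summing them.

```python
import math

def encode(lum):
    """Log-encode a luminance, as LogLuv-style formats do."""
    return math.log2(lum)

def decode(v):
    """Recover linear luminance from the log encoding."""
    return 2.0 ** v

a, b = 4.0, 4.0
linear_sum = a + b                       # what additive lighting should give: 8.0
blended = decode(encode(a) + encode(b))  # what blending the encoded values gives: 16.0
# The blend in log space effectively multiplied the two luminances.
```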

Why do you need MRT when you're rendering your HDR output? By the way, the restriction you're talking about, where all the RTs need to have the same bit depth... I believe this only applies to the NVIDIA 6-series and 7-series.

MJP is right, unless the original poster wants to read HDR textures and store them in the G-buffer.

In that case I would not move to a 64-bit G-buffer. LogLuv could be OK, but you should also try another solution: store a multiplier in the alpha channel. The alpha channel is "almost useless" in a deferred shader; by "almost useless" I mean that you sometimes still want to discard pixels according to the texture's alpha channel. I suggest writing two different shaders: one that converts HDR textures into 32-bit RGBA with a multiplier in the alpha channel, and another for non-HDR textures that supports alpha test.

All the games I know of that use a G-buffer use three 8:8:8:8 render targets to store the material data, plus one depth buffer.

This is a good choice for several reasons. On certain hardware, three RTs in an MRT are the sweet spot and four are slower, and 8:8:8:8 render targets are easily more than twice as fast as 16:16:16:16 because, for example, there are alpha-blending optimizations for them that the wider render targets do not have (clears are slower too, etc.).

If you have an 8:8:8:8 G-buffer, you can only get HDR lighting by using a higher-precision lighting buffer.

There are several tricks to achieve this. There is something called quasi-HDR, where you render everything into an 8:8:8:8 render target and store a scale value in the alpha channel, then resolve this into a 16:16:16:16 render target and let the post-FX take it from there. There is also the L16uv or LogLuv model, which is pretty cool. Because all this only applies to your opaque objects, you will have the challenge of finding appropriate ways to apply lots of lights to your alpha-blended objects. Paying for a 16:16:16:16 render target hurts especially here... you might think about reducing quality for those objects by going with an 8:8:8:8 render target.
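As a rough illustration of the log-luminance idea behind LogLuv-style formats: the log2 of luminance is remapped into [0,1] so it fits a narrow integer channel, which compresses a huge dynamic range into few bits. This is a CPU-side Python sketch; the range constants are made up for illustration and are not the actual LogLuv constants.

```python
import math

# Illustrative range: covers luminances from 2^-8 up to 2^8.
LOG_MIN, LOG_MAX = -8.0, 8.0

def encode_log_luminance(y):
    """Map luminance y into [0,1] via log2, ready to quantize to a channel."""
    l = math.log2(max(y, 2.0 ** LOG_MIN))
    return min(max((l - LOG_MIN) / (LOG_MAX - LOG_MIN), 0.0), 1.0)

def decode_log_luminance(e):
    """Invert the encoding back to linear luminance."""
    return 2.0 ** (e * (LOG_MAX - LOG_MIN) + LOG_MIN)
```

The appeal is that relative precision stays roughly constant across the whole range, which is exactly what HDR luminance needs; the drawback, as MJP notes above, is that additive blending no longer works on the encoded values.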

I'll take your advice and aim for 32-bit render targets. I will be rendering outdoor scenes with almost no transparency (or where there is transparency, it will be small enough that skipping lighting for it won't be an issue). I was thinking of handling vegetation and fence-type stuff with a method similar to the one outlined here ( http://www.kevinboulanger.net/grass.html ), which uses a mixture of alpha testing and blending so that artifacts are only minor around the edges; that will hopefully reduce aliasing without looking ugly or murdering performance. Alpha will just write over the G-buffer before lighting.

I would like to be able to use some HDR in the G-buffer, such as an HDR skybox, so I think using alpha for intensity will be more than enough.

I am pretty new to this 3D stuff, so I also have a few more nagging questions: How do you avoid lighting areas with nothing in them (such as empty regions of a cleared buffer, or the skybox)? Do I need to do anything with the stencil buffer for the lighting passes?

Finally, for accumulation of the lighting I assume that I should be using additive blending on a floating point surface?

After taking your advice into account, my plan is essentially:

8:8:8:8 - Color + intensity
10:10:10:2 - Normal XYZ, with the last two bits unused or used as a material lookup
32 - Depth

Unfortunately, I will probably have to throw in a fourth texture for lighting parameters unless there is a way to read the depth buffer in XNA (PC). I also wouldn't mind storing motion X and Y vectors for motion blur.

Accumulate lights in HalfVector4

Combine the lighting and G-buffer into a new HalfVector4 target and run that through post-processing.

Is this essentially how it's done?
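For what it's worth, the combine step in a plan like this usually boils down to something like the following (a Python sketch with made-up names, standing in for the actual HLSL; it assumes the alpha-intensity scheme discussed above):

```python
def combine(albedo, intensity, diffuse_light, specular_light):
    """Combine G-buffer color with accumulated lighting.

    albedo: LDR color from the G-buffer; intensity: the HDR multiplier
    stored in the alpha channel; diffuse_light/specular_light: values read
    from the HalfVector4 light accumulation buffer. All names illustrative.
    """
    hdr_albedo = tuple(c * intensity for c in albedo)
    return tuple(a * d + s
                 for a, d, s in zip(hdr_albedo, diffuse_light, specular_light))
```

The result stays in HDR and is handed to the post-processing chain (tone mapping, bloom, etc.) from there.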

You might consider Blizzard's approach with Starcraft 2 where they used 4 x RGBA16 targets for the G-Buffer. This is handy for things like HDR light values, linear colors, high precision for normals, linear depth, etc. So you can store all this data in its raw form, without worrying about a packing/unpacking step.

If you'd like to look into it, here's a link to the pdf:
http://www.scribd.com/doc/4898192/Graphics-TechSpec-from-StarCraft-2

Quote:
Original post by n00body
You might consider Blizzard's approach with Starcraft 2 where they used 4 x RGBA16 targets for the G-Buffer. This is handy for things like HDR light values, linear colors, high precision for normals, linear depth, etc. So you can store all this data in its raw form, without worrying about a packing/unpacking step.

If you'd like to look into it, here's a link to the pdf:
http://www.scribd.com/doc/4898192/Graphics-TechSpec-from-StarCraft-2


It's convenient all right, but you'll certainly pay for that convenience: you're talking double the bandwidth and storage requirements. IMO it's well worth taking the time to write some code for packing and unpacking your G-buffer.

I agree that Blizzard's setup strikes me as quite excessive, but it's nice to know that it is possible to use as much as 256 bits per pixel in the G-buffer. I'm sure Blizzard didn't take that step without making absolutely sure it was a practical solution.

... they have fallback paths ... otherwise they'd lose most of the market out there :-) ... and StarCraft 2 has not shipped yet, as far as I remember. They might change this when they are in QA and figure out that their target market is quite small with that setup.

Yeah, I don't really want to spend my time coding fallbacks, which is why I'm trying to limit myself to recent Shader Model 3.0+ hardware. I'm even considering working only with 4.0, since I'm expecting it to take at least a year to finish anything worthwhile, but I'm sure XP will still be around, so I'm holding back.

I don't mind aiming a little higher than the current hardware, but I don't want to do anything stupid either.
