HDR and Multisample

5 comments, last by jpventoso 15 years, 7 months ago
Hi all :) Is there any way to enable multisampling in an HDR scene? If I'm rendering the scene into an A16B16G16R16F render-target texture, can I do it with multisampling enabled? I read somewhere that you can create a render target with IDirect3DDevice9::CreateRenderTarget(), render the scene into it, and then copy its surface to a texture with StretchRect(). But I can't create an A16B16G16R16F render target... :(

Thank you!

P.S. There must be at least one way to do this, since the Source engine (and many others) does it just fine :/
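For reference, the CreateRenderTarget()/StretchRect() flow described above looks roughly like this in D3D9. This is only a sketch: it assumes a valid `device`, `width`, and `height`, and that the adapter has already been confirmed to support multisampled A16B16G16R16F.

```cpp
IDirect3DSurface9* msaaRT = NULL;
IDirect3DTexture9* resolveTex = NULL;
IDirect3DSurface9* resolveSurf = NULL;

// 1. Multisampled FP16 render target (an MSAA target can't be a texture).
device->CreateRenderTarget(width, height, D3DFMT_A16B16G16R16F,
                           D3DMULTISAMPLE_4_SAMPLES, 0, FALSE,
                           &msaaRT, NULL);

// 2. Non-multisampled FP16 texture to resolve into.
device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_A16B16G16R16F, D3DPOOL_DEFAULT,
                      &resolveTex, NULL);
resolveTex->GetSurfaceLevel(0, &resolveSurf);

// Per frame:
device->SetRenderTarget(0, msaaRT);
// ... render the HDR scene ...

// 3. Resolve the MSAA samples into the texture.
device->StretchRect(msaaRT, NULL, resolveSurf, NULL, D3DTEXF_NONE);

// 4. Bind resolveTex and draw a full-screen quad to tone map / display.
```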
There's a small blurb about it here:
HDR and antialiasing at the same time.

Note that that's not how Source does it. Source simply renders to an ARGB8 surface, using a simple "multiplicator" at the end of the shader to modulate brightness. That way they get HDR, multisampling, and also filtering after tone mapping, which has some interesting properties for how your eye perceives triangle edges. Of course, this doesn't work for every technique out there: blending, multipass lighting, and blooming happen at lower precision and in tonemapped space (only partially linear in the non-saturated ramp), and advanced tone mapping that requires more precision or an intermediate HDR format isn't possible. But I guess it's enough for the kind of games they make, and it satisfies their compatibility/performance goals.
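A rough CPU-side sketch of that "multiplicator" idea (the function name and exposure values here are my own illustration, not Source's actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Sketch of Source-style "HDR in an 8-bit target": scale the linear HDR
// value by an exposure factor at the end of the shader, then let the
// ARGB8 target clamp and quantize it. Name and constants are hypothetical.
uint8_t toLdr(float hdr, float exposure) {
    float scaled = hdr * exposure;                      // the "multiplicator"
    float clamped = std::min(std::max(scaled, 0.0f), 1.0f);
    return (uint8_t)std::lround(clamped * 255.0f);
}

// toLdr(0.5f, 1.0f) == 128; anything above 1/exposure saturates to 255,
// which is why blending and bloom then happen in tonemapped space.
```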

I intended to write a more complete article on the subject, but it's been a while in the making...

LeGreg
Thank you for the link, I'm gonna read it right now :)

About the Source engine: I noticed that it uses some kind of HDR "emulation", but I remember that when Source 2007 came out (HL2: Episode Two), you needed FP16-capable hardware to enable full HDR, so I thought they had finally implemented HDR "for real".

Thanks again, and I'll post any doubts/comments!

EDIT: I finished reading. So, if the hardware accepts A16B16G16R16F as a render target, I could simply create it with whatever multisample type I choose, render the scene into it, and finally copy the result into a texture and display it with a full-screen quad. Did I understand that right? Have you used this method? Do you know approximately what percentage of FPS is lost with this technique?

P.S.: Sorry for so many questions :)
Quote:Original post by jpventoso
EDIT: I finished reading. So, if the hardware accepts A16B16G16R16F as a render target, I could simply create it with whatever multisample type I choose, render the scene into it, and finally copy the result into a texture and display it with a full-screen quad. Did I understand that right? Have you used this method? Do you know approximately what percentage of FPS is lost with this technique?


Yes, it's being used by a lot of games out there.

Is there a lot of FPS lost? Compared to what? :) Beware of comparing apples to oranges here.

There's definitely a lot going on. Rendering to a wider format:

- takes more memory;
- takes more bandwidth to render to and to texture from;
- may be more expensive to multisample (compression, and the ROPs writing individual samples);
- may be more expensive to alpha blend (fast alpha blending may only be possible at 32 bits per pixel);
- necessitates an extra pass for tone mapping; etc.

How many FPS you lose depends on what hardware you're running (newer hardware is better than older, high end better than low end, some brands better than others), what FPS you started with, and how much each of the elements above is the bottleneck in your case. So your mileage may vary.

LeGreg
Sorry, my English isn't good, and sometimes I think people will understand concepts that I explained poorly :/

I was referring to the FPS comparison between these cases:

- rendering the scene on an FP surface with multisampling enabled, then copying the data to an FP texture (for tone mapping, blooming, etc.);
- rendering the scene directly into the FP texture (no multisampling).

But I'd really like to give your technique a try, so I'll get down to work tomorrow :) Good night!

Thanks again!
You can do FP16 with MSAA on hardware that supports it; however, that's limited to the ATI X1000 series and above and the Nvidia 8000 series and above. It's also extremely expensive on hardware that isn't high end, as it eats up a tremendous amount of bandwidth.
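You can query that support up front rather than letting creation fail. A sketch, assuming an `IDirect3D9*` named `d3d9` already exists:

```cpp
// Ask the runtime whether the adapter supports MSAA on the FP16 format
// before creating the target; this is where pre-X1000 / pre-GeForce 8
// hardware will report failure.
DWORD qualityLevels = 0;
HRESULT hr = d3d9->CheckDeviceMultiSampleType(
    D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
    D3DFMT_A16B16G16R16F,      // the HDR format you intend to render to
    TRUE,                      // windowed mode
    D3DMULTISAMPLE_4_SAMPLES, &qualityLevels);

bool fp16MsaaSupported = SUCCEEDED(hr) && qualityLevels > 0;
```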

Valve's HDR implementation effectively gets around that, but it has some problems. Mainly that you have no way to immediately calculate the luminance for tone-mapping, and you don't have an HDR buffer available for post-processing (which means you can't do HDR bloom).

There's another alternative if you don't want to go either of those routes: you can encode HDR information in a regular R8G8B8A8 buffer using a format that works with multisampling and linear filtering. Personally I use the LogLuv format in my current project, and I'm very happy with the results. I wrote this blog entry about my implementation, if you're interested. This presentation here discusses a few other encoding formats available.
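To give a flavor of the encoding idea, here is a simplified LogLuv-style round trip of my own (not MJP's actual shader code): pack log2 of the luminance into two 8-bit channels and chromaticity into the other two, so an RGBA8 target can carry HDR data.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Hedged sketch of a LogLuv-style encoding. The chroma ratios and the
// covered luminance range are assumptions for illustration.
struct Rgba8 { uint8_t r, g, b, a; };

static const float kLogMin = -16.0f, kLogMax = 16.0f; // covered range, f-stops

Rgba8 encodeLogLuv(float R, float G, float B) {
    // Rec.709 luminance; chroma stored as simple ratios (sketch only).
    float Y = 0.2126f * R + 0.7152f * G + 0.0722f * B;
    Y = std::max(Y, 1e-6f);
    float sum = R + G + B + 1e-6f;
    float u = R / sum, v = G / sum;

    // 16-bit log-luminance split across two 8-bit channels.
    float logY = (std::log2(Y) - kLogMin) / (kLogMax - kLogMin);
    logY = std::min(std::max(logY, 0.0f), 1.0f);
    uint16_t q = (uint16_t)std::lround(logY * 65535.0f);

    Rgba8 out;
    out.r = (uint8_t)(q >> 8);
    out.g = (uint8_t)(q & 0xFF);
    out.b = (uint8_t)std::lround(u * 255.0f);
    out.a = (uint8_t)std::lround(v * 255.0f);
    return out;
}

float decodeLuminance(const Rgba8& p) {
    uint16_t q = (uint16_t)((p.r << 8) | p.g);
    float logY = q / 65535.0f * (kLogMax - kLogMin) + kLogMin;
    return std::exp2(logY);
}
```

The point of the log mapping is that quantization error is relative rather than absolute, so both very dark and very bright values round-trip within a small percentage.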

[Edited by - MJP on September 16, 2008 8:49:25 AM]
Thanks again MJP, I'll keep working tomorrow and I'll post any doubts :) You gave me really good advice for finally making a decision about which way to go with HDR...

