B_old

Performance of MRT rendering

Hello, in an application that uses deferred shading I want to add a self-glow (emissive) property to my materials. One way to do this is to store an emissive value in the g-buffer and apply it later. Then it occurred to me that, like ambient lighting, I only want to apply this term once. So instead of writing it to the g-buffer, I could render the emissive color directly to the final render target during g-buffer creation and later blend all the lights on top of it. This just means rendering to one more render target that I already have anyway. On my system the performance impact is very small.

My actual question: could performance be noticeably worse on other systems if I render to yet another render target instead of sacrificing one channel in the g-buffer? Memory consumption should be about the same; the method that renders to the final render target along with the g-buffer potentially uses less overall, although with the emissive value in the g-buffer, less memory is in use at the same time.

EDIT: Maybe I can summarize it: should I render out ambient and emissive values as soon as possible, i.e. during g-buffer creation? I would have to bind another render target (one I already have) but could keep a slimmer g-buffer.

EDIT: I have now decided to go with keeping the emissive value in the g-buffer. It seems fastest after all.

[Edited by - B_old on August 17, 2009 6:45:28 AM]
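To make the trade-off concrete, here is a rough estimate of the extra write bandwidth each option costs during the g-buffer pass. The resolution and pixel formats below are illustrative assumptions of mine, not numbers from the post, and this is only a sketch of the arithmetic, not a claim about any particular GPU:

```python
# Back-of-the-envelope comparison of the two options discussed above.
# Assumed (not from the post): 1920x1080, RGBA8 g-buffer targets,
# and an RGBA16F final render target (a common HDR format).

WIDTH, HEIGHT = 1920, 1080
PIXELS = WIDTH * HEIGHT

RGBA8 = 4      # bytes per pixel
RGBA16F = 8    # bytes per pixel

# Option A: emissive packed into an otherwise unused channel of a
# g-buffer target that is written anyway -> no extra write bandwidth
# in the g-buffer pass.
extra_writes_a = 0

# Option B: the final render target is bound as one more MRT, so every
# g-buffer pixel incurs an additional full write to it.
extra_writes_b = PIXELS * RGBA16F

print(f"Option A extra g-buffer-pass writes: {extra_writes_a / 2**20:.1f} MiB")
print(f"Option B extra g-buffer-pass writes: {extra_writes_b / 2**20:.1f} MiB")
```

Option B is not pure overhead, though: it saves the lighting pass from reading the emissive channel back out of the g-buffer, so which option wins depends on how the hardware balances MRT write cost against the extra read later.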

