Precomputed Radiance Transfer Question

So I am almost done perfecting my GBuffer implementation and I ran into some issues: I can't figure out how to render light coefficients into a GBuffer. The article I am trying to implement is here: http://www.cg.tuwien.ac.at/~tivolo/Bachelors%20Thesis%20-%20Precomputed%20Radiance%20Transfer.pdf (Part 4.3).
Any ideas? I was thinking about rendering a cube map and projecting its texture onto the screen, or am I overcomplicating everything?
I posted this in two places since I need an answer fairly quickly, and this seems to meet the criteria of both.

You're just writing out SH coefficients in RGBA, just like the components of a normal vector; granted, you're going to need a few render targets for acceptable results, since you need something like second-order coefficients to get anything even approaching usable. This cube map thing makes me feel like we're talking about entirely different concepts, though, so perhaps more explanation of what you're trying to store is in order.
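For illustration, here is a minimal C++ sketch of the bookkeeping that "a few render targets" implies, assuming a monochrome nine-coefficient (l = 0..2) expansion packed into three RGBA texels; the names (Float4, SH9, PackSH9, UnpackSH9) are purely illustrative, not from the thesis or any particular engine:

```cpp
// Minimal sketch: pack nine scalar SH coefficients (an l = 0..2 expansion)
// into three RGBA render-target texels and unpack them again.
// Names and layout are illustrative only.
#include <array>
#include <cstdio>

struct Float4 { float x, y, z, w; };   // stand-in for one RGBA texel
using SH9 = std::array<float, 9>;      // c[0] = l0, c[1..3] = l1, c[4..8] = l2

// What a pixel shader writing to three MRT slots would effectively output.
std::array<Float4, 3> PackSH9(const SH9& c)
{
    return {{
        Float4{ c[0], c[1], c[2], c[3] },   // RT0: first four coefficients
        Float4{ c[4], c[5], c[6], c[7] },   // RT1: next four
        Float4{ c[8], 0.0f, 0.0f, 0.0f }    // RT2: last one, three channels spare
    }};
}

// The lighting pass reverses the packing before doing the SH dot product.
SH9 UnpackSH9(const std::array<Float4, 3>& rt)
{
    return { rt[0].x, rt[0].y, rt[0].z, rt[0].w,
             rt[1].x, rt[1].y, rt[1].z, rt[1].w,
             rt[2].x };
}

int main()
{
    SH9 c = { 0.9f, 0.1f, -0.2f, 0.05f, 0.0f, 0.3f, -0.1f, 0.02f, 0.07f };
    auto restored = UnpackSH9(PackSH9(c));
    std::printf("round trip ok: %d\n", int(restored == c));
    return 0;
}
```

If you store full RGB coefficients instead of a monochrome transfer, the storage triples (nine per colour channel), which is where the render-target count really adds up; that's why deferred setups often keep only monochrome visibility/transfer in the GBuffer.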


[quote name='InvalidPointer' timestamp='1320296012' post='4880022']
You're just writing out SH coefficients in RGBA, just like the components of a normal vector; granted, you're going to need a few render targets for acceptable results, since you need something like second-order coefficients to get anything even approaching usable. This cube map thing makes me feel like we're talking about entirely different concepts, though, so perhaps more explanation of what you're trying to store is in order.
[/quote]

I understand what I am rendering, but what I don't get is how. Do I create a cube map and wrap that around the SH, or do I just pre-bake the vertices and render the textures?


[quote name='UNREAL WARRl0R' timestamp='1320355759' post='4880290']
I understand what I am rendering, but what I don't get is how. Do I create a cube map and wrap that around the SH, or do I just pre-bake the vertices and render the textures?
[/quote]

All SH work is done with coefficients; look at what the DXSDK samples do. The confusion may stem from how the basis function values are precomputed and stored in a texture for later scaling in the shader; it's entirely possible to evaluate the associated Legendre polynomials directly in ALU, and that may actually be faster depending on the hardware. I'm still a little lost as to what you're trying to store here, though. Is this SH lighting calculated by way of some deferred shading whatsit? Visibility coefficients? An SH lightmap?
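As a concrete reference for the "evaluate directly in ALU" route, here is a minimal C++ sketch of evaluating the first nine real SH basis functions (l = 0..2) from a unit direction; the normalization constants are the standard ones, while the function name EvalSH9 is just for illustration:

```cpp
// Minimal sketch: evaluate the nine real SH basis functions (l = 0..2)
// for a unit direction (x, y, z). Constants are the usual normalization
// factors; the function name is illustrative.
#include <array>
#include <cmath>
#include <cstdio>

std::array<float, 9> EvalSH9(float x, float y, float z)
{
    return {
        0.282095f,                          // Y(0, 0)
        0.488603f * y,                      // Y(1,-1)
        0.488603f * z,                      // Y(1, 0)
        0.488603f * x,                      // Y(1, 1)
        1.092548f * x * y,                  // Y(2,-2)
        1.092548f * y * z,                  // Y(2,-1)
        0.315392f * (3.0f * z * z - 1.0f),  // Y(2, 0)
        1.092548f * x * z,                  // Y(2, 1)
        0.546274f * (x * x - y * y)         // Y(2, 2)
    };
}

int main()
{
    // Typical uses: project a directional light into SH, or reconstruct a
    // function stored as SH coefficients by dotting them with this basis.
    const float invLen = 1.0f / std::sqrt(3.0f);
    for (float v : EvalSH9(invLen, invLen, invLen))
        std::printf("%f\n", v);
    return 0;
}
```

Whether this beats a texture lookup of precomputed basis values depends mostly on whether the shader ends up ALU-bound or bandwidth-bound on the target hardware.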



Essentially the same as what I had come up with. Thanks for the help, though.
