Light Pre-Pass Renderer
Hi,
I made a few notes on how the Light Pre-Pass idea I am planning to use works. I posted it to my blog:
http://diaryofagraphicsprogrammer.blogspot.com/2008/03/light-pre-pass-renderer.html
- Wolf
Thanks, interesting stuff there. I may have to swap an implementation of your algorithm in for my deferred renderer and see how things go [smile].
In essence it is a multi-light solution that scales well on different hardware. It can be used on pretty old hardware as well. You can scale quality and performance to your needs.
I am curious what people will make out of this.
- Wolf
I remember reading your blog about a month ago or so when you first discussed your thoughts on the idea; it sounds like you've really polished them up since.
Any chance of a demo in the works? I think that's RenderMonkey in the background of the screen shots.
I'm curious how it would scale compared to some of the examples in the IOTD section, especially on hardware below the Radeon HD or GF8 series.
Neutrinohunter
Damian mentioned that he might integrate it into his light-indexed renderer demo, which would be great. My current demo is just a RenderMonkey scene; I won't have time to implement anything more useful ...
Just to make sure I haven't misunderstood something... if you wanted to vary the specular exponent by surface material (which it seems any realistic scenario will call for), you would need more data than fits in a single 8:8:8:8 render target. I'm thinking you would need NdotH and Attenuation*SpotFactor separated at least, plus the light color... so even if you *don't* separate the light's specular and diffuse colors, you're easily still looking at two light buffers... correct? Or am I missing something... maybe a creative way of packing more data into the single render target? Not that this is necessarily a drawback, though it seemed like one benefit of this technique was avoiding MRT... still, on the whole, probably fewer buffers than a deferred renderer, though it all depends on the implementation I suppose.
Quote: "maybe a creative way of packing more data into the single render target?"
If you want to vary the specular power value per material instead of per light, you might think about a different storage system.
Creative packing is one option. The extra value you want to store is the N.L term from the pixel shader code, which is actually N.L * Attenuation. I believe you can then divide by this term to get back to the original values per pixel. In other words, you could reconstruct the R.V^n term this way and then apply your material-specific power value ... I haven't tried it.
This would also allow you to separate the light color from the diffuse term. If you want to stay with one render target you can just start packing stuff into a 16:16:16:16 instead of a 8:8:8:8 ...
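To make the 8:8:8:8 vs. 16:16:16:16 trade-off concrete, here is a rough numerical sketch of the precision you gain per channel. The value and bit depths are hypothetical; this just quantizes a normalized term such as N.L * ATT the way an integer render target channel would.

```python
# Quantize a [0, 1] value to the nearest level an integer channel can hold.
def quantize(x, bits):
    levels = (1 << bits) - 1
    return round(x * levels) / levels

value = 0.337421                           # hypothetical per-pixel N.L * ATT
err8 = abs(value - quantize(value, 8))     # error in an 8:8:8:8 channel
err16 = abs(value - quantize(value, 16))   # error in a 16:16:16:16 channel
# err16 is roughly 256x smaller than err8, at twice the bandwidth cost
```

That factor-of-256 precision gain is why the wider target makes packing extra terms (like a separated N.L * ATT) practical.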
Quote: "What you want to store extra is the N.L term in the pixel shader code. That is actually N.L * Attenuation. I believe you can then divide through this and go back to the original values per pixel."
It seems like this would only work for a single light. You won't be able to factor out the R.V^n term after the summation of multiple lights (at least, not in a mathematically correct way). Come to think of it, I'm not sure you could even if you separated out the R.V term into its own component, as a^n + b^n != (a + b)^n.
In other words, even though you could assume an exponent of 1 in the light pass, as you sum each new R.V term (one per light), you will lose the information of how much specularity came from each light. When you retrieve this value in the forward rendering pass, you will have a single combined R.V term. If you then apply a material-specific exponent to this value, you will get a different value than if you had applied the exponent to each individual R.V term and then summed. Perhaps it's larger by some amount that's easy to undo mathematically? You could always just scale it down/up or something, but it's at least worth noting.
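The non-linearity is easy to check numerically. With hypothetical R.V terms from two lights and a hypothetical material power, applying the exponent per light and then summing gives a completely different number than exponentiating the summed term:

```python
a, b = 0.8, 0.6   # hypothetical R.V terms from two different lights
n = 16            # hypothetical material specular power

per_light = a**n + b**n   # exponent applied per light, then summed
after_sum = (a + b)**n    # exponent applied to the summed term

print(per_light)  # ~0.028
print(after_sum)  # ~217.8 -- nowhere near the per-light result
```

There is no fixed correction factor relating the two, which is the core of the objection above.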
So, unless I'm missing something, I don't see how this technique can correctly handle material-specific specular power values. I hope I'm just being an idiot and missing something. :/
I would store the N.L * ATT value per pixel and I have already stored the specular term per pixel in the light buffer. Both are blended and treated in the same way.
So for each pixel you would do (R.V^n * N.L * ATT) / (N.L * ATT). Then you apply a material-specific exponent m to the result, like this: (R.V^n)^m.
Let's try it and see how far we get :-) ...
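A quick single-light sanity check of the division above, with hypothetical values. Here the light buffer is assumed to hold S = R.V^n * N.L * ATT in one channel and D = N.L * ATT in another:

```python
rv, nl, att = 0.9, 0.7, 0.5   # hypothetical R.V, N.L, attenuation
n, m = 1, 32                  # light-pass exponent, material exponent

S = rv**n * nl * att          # stored specular channel
D = nl * att                  # stored N.L * ATT channel (must be > 0)

recovered = S / D             # equals rv**n for a single light
material_spec = recovered**m  # (R.V^n)^m with the material's power

assert abs(recovered - rv**n) < 1e-9
```

Note this only holds per light; once several lights have been blended into the same channels, the division no longer factors out a single R.V^n term, which is exactly the concern raised above.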
The main point here is to start with the lighting equation and split it up in a meaningful way between light properties and material properties, and this way come up with new and cool renderer designs ... some day I will try this with a Cook-Torrance model, or a Strauss lighting model, and see how far I get.
The worst thing that can happen is that it becomes as inefficient as a deferred renderer.