Light Pre-Pass Renderer

Hi, I made a few notes on how the Light Pre-Pass idea I am planning to use works, and posted them to my blog: http://diaryofagraphicsprogrammer.blogspot.com/2008/03/light-pre-pass-renderer.html - Wolf

Thanks, interesting stuff there. I may have to swap an implementation of your algorithm in for my deferred renderer and see how things go [smile].

In essence it is a multi-light solution that scales well across different hardware; it can even be used on pretty old hardware, and you can trade quality against performance to suit your needs.
I am curious what people will make of this.

- Wolf

I remember reading your blog about a month ago or so, when you first discussed your thoughts on the idea; it sounds like you've really polished them up since then.
Any chance of a demo in the works? I think that's RenderMonkey in the background of the screenshots.

I'm curious how it would scale compared to some of the examples in the IOTD section, especially on hardware that isn't Radeon HD or GF8 series and upwards.

Neutrinohunter

Damian mentioned that he might integrate it into his light-indexed renderer demo, which would be great. My current demo is just a RenderMonkey scene; I won't have time to implement anything more useful ...

Just to make sure I haven't misunderstood something: if you wanted to vary the specular exponent by surface material (which any realistic scenario seems to call for), you would need more data than can be stored in a single 8:8:8:8 render target. I'm thinking you would need N.H and Attenuation*SpotFactor separated at least, plus the light color... which, even if you *don't* separate the light's specular and diffuse colors, easily puts you at two light buffers... correct? Or am I missing something... maybe a creative way of packing more data into the single render target? Not that this is necessarily a drawback, though it seemed like one benefit of this technique was avoiding MRT... still probably fewer buffers than a deferred renderer on the whole, though it all depends on the implementation, I suppose.

Quote:
maybe a creative way of packing more data into the single render target?
If you want to vary the specular power per material instead of per light, you might think about a different storage scheme.

Creative packing is one option then. What you want to store additionally is the N.L term from the pixel shader code, which is actually N.L * Attenuation. I believe you can then divide through by this and get back to the original values per pixel. In other words, you could reconstruct the R.V^n term this way and then apply your material-specific power value ... I haven't tried it.
This would also allow you to separate the light color from the diffuse term. If you want to stay with one render target, you can just start packing stuff into a 16:16:16:16 instead of an 8:8:8:8 ...
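
To make the packing concrete, here is a minimal CPU-side sketch of what each light might add to a 16:16:16:16 light buffer for one pixel. The channel layout (RGB = light color * N.L * att, A = the extra N.L * att term) and all names are my own assumptions, not a fixed scheme:

#include <algorithm>
#include <cstdio>

// One texel of a hypothetical 16:16:16:16F light buffer.
struct Texel { float r = 0, g = 0, b = 0, a = 0; };

// What one light adds for one pixel; with additive blending enabled,
// the hardware sums these contributions exactly like this loop does.
void accumulateLight(Texel& t, float nDotL, float att,
                     float lightR, float lightG, float lightB)
{
    const float nlAtt = std::max(nDotL, 0.0f) * att;
    t.r += lightR * nlAtt;
    t.g += lightG * nlAtt;
    t.b += lightB * nlAtt;
    t.a += nlAtt;               // the separately stored N.L * att term
}

int main()
{
    Texel t;
    accumulateLight(t, 0.8f, 1.0f, 1.0f, 0.9f, 0.8f);  // warm key light
    accumulateLight(t, 0.3f, 0.5f, 0.2f, 0.3f, 0.9f);  // dim blue fill
    std::printf("light buffer: %.2f %.2f %.2f (N.L*att sum %.2f)\n",
                t.r, t.g, t.b, t.a);
}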

Quote:
What you want to store additionally is the N.L term from the pixel shader code, which is actually N.L * Attenuation. I believe you can then divide through by this and get back to the original values per pixel.


It seems like this would only work for a single light. You won't be able to factor out the R.V^n term after the summation of multiple lights (at least, not in a mathematically correct way). Come to think of it, I'm not sure you could even if you separated the R.V term into its own component, since a^n + b^n != (a + b)^n. For example, with n = 2: 1^2 + 1^2 = 2, but (1 + 1)^2 = 4.

In other words, even though you could assume an exponent of 1 in the light pass, as you sum each new R.V term (one per light) you lose the information of how much specularity came from each light. When you retrieve the value in the forward rendering pass, you will have a single combined R.V term; if you then apply a material-specific exponent to it, you will get a different value than if you had applied the exponent to each individual R.V term and then summed. Perhaps it's larger by some amount that's easy to undo mathematically? You could always just scale it down/up or something, but it's at least worth noting.

So, unless I'm missing something, I'm not sure how this technique could correctly handle material-specific specular power values. I hope I'm just being an idiot and missing something. :/

I would store the N.L * Att value per pixel, and I have already stored the specular term per pixel in the light buffer. Both are blended and treated in the same way.
So for each pixel you would do (R.V^n * N.L * Att) / (N.L * Att). Then you apply a material exponent mn to the recovered R.V^n like this: (R.V^n)^mn.
Let's try it and see how far we get :-) ...
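
Here is a small scalar sketch of that idea (all values hypothetical), assuming one channel has accumulated R.V^n * N.L * Att and another N.L * Att. It also prints the per-light ground truth, so you can see the error the summation objection above predicts:

#include <cmath>
#include <cstdio>

int main()
{
    const float n = 1.0f;            // exponent applied in the light pass
    const float materialExp = 16.0f; // hypothetical per-material exponent

    // Two lights hitting the same pixel (made-up values).
    const float rv[2]    = { 0.9f, 0.5f };  // R.V per light
    const float nlAtt[2] = { 0.8f, 0.3f };  // N.L * Att per light

    // What the two light-buffer channels accumulate via blending.
    float specAccum = 0.0f, nlAttAccum = 0.0f;
    for (int i = 0; i < 2; ++i) {
        specAccum  += std::pow(rv[i], n) * nlAtt[i];
        nlAttAccum += nlAtt[i];
    }

    // The proposal: (R.V^n * N.L * Att) / (N.L * Att), then ^mn,
    // rescaled by the accumulated N.L * Att for comparison.
    const float rvRecovered = specAccum / nlAttAccum;
    const float recon = std::pow(rvRecovered, materialExp) * nlAttAccum;

    // Ground truth: apply the material exponent per light, then sum.
    float reference = 0.0f;
    for (int i = 0; i < 2; ++i)
        reference += std::pow(rv[i], materialExp) * nlAtt[i];

    std::printf("reconstructed %.4f vs per-light %.4f\n", recon, reference);
}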

The main point here is to start with the lighting equation and split it up in a meaningful way between light properties and material properties, and this way come up with new and cool renderer designs ... some day I will try this with a Cook-Torrance model, or a Strauss lighting model, and see how far I get.
The worst thing that can happen is that it becomes as inefficient as a deferred renderer.

Hey, this sounds like a cool idea ... thank you :-) ... so, adding an object material id while writing the normals: you can put it in the alpha channel. It might be a specular exponent or something more complex ... lots of opportunities.

Couldn't you add a per-pixel material attribute in the first pass, where you write the normals, and then access that buffer during the light pass?
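
A minimal sketch of what that first-pass packing could look like; the encoding (a [-1,1] normal remapped to 8 bits per channel, with the exponent clamped into the 8-bit alpha) and all names are illustrative assumptions:

#include <cmath>
#include <cstdint>
#include <cstdio>

struct Rgba8 { std::uint8_t r, g, b, a; };

// First-pass output: normal in RGB, per-material attribute in alpha.
Rgba8 encodeNormalAndMaterial(float nx, float ny, float nz,
                              float specularExponent /* 1..255 */)
{
    // Usual [-1,1] -> [0,255] normal encoding.
    auto pack = [](float v) {
        return static_cast<std::uint8_t>((v * 0.5f + 0.5f) * 255.0f + 0.5f);
    };
    // Alpha carries the material attribute; the light pass (or the
    // forward pass) reads it back as an exponent or a material-table id.
    return { pack(nx), pack(ny), pack(nz),
             static_cast<std::uint8_t>(std::fmin(specularExponent, 255.0f)) };
}

int main()
{
    const Rgba8 t = encodeNormalAndMaterial(0.0f, 0.0f, 1.0f, 32.0f);
    std::printf("normal RT texel: %d %d %d, exponent %d\n",
                (int)t.r, (int)t.g, (int)t.b, (int)t.a);
}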

Quote:
Hey, this sounds like a cool idea ... thank you :-) ... so, adding an object material id while writing the normals: you can put it in the alpha channel. It might be a specular exponent or something more complex ... lots of opportunities.

Yeah, I like that too... although it seems the poster's post disappeared!

Nevermind... either I'm on crack or my browser/gamedev is being weird.

The thing I did not really mention was:

If you can reconstruct your diffuse and specular terms in the forward rendering pass, you can do all kinds of lighting models there.
What I mean is: if you have R.V^n as the specular term and N.L * Att as the diffuse term, you can build the usual Phong lighting model, or you might think about different models.
E.g. you can do a skin shader by just adding a sub-surface scattering term. In one of our games I did it like this:

Ambient + Subsurface + Diffuse + Specular

This is just an idea. Because you construct your lighting equation in the forward rendering pass, there are lots of opportunities here.
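
As a sketch of how such a forward-pass composition might look (only the Ambient + Subsurface + Diffuse + Specular structure comes from the post above; the inputs, names, and the idea that the subsurface term arrives precomputed are my assumptions):

#include <cstdio>

struct Float3 { float r, g, b; };

static Float3 add(Float3 a, Float3 b)   { return { a.r + b.r, a.g + b.g, a.b + b.b }; }
static Float3 mul(Float3 a, Float3 b)   { return { a.r * b.r, a.g * b.g, a.b * b.b }; }
static Float3 scale(Float3 a, float s)  { return { a.r * s, a.g * s, a.b * s }; }

// Ambient + Subsurface + Diffuse + Specular, assembled in the forward
// pass from terms reconstructed out of the light buffer.
Float3 shadeSkin(Float3 ambient,
                 Float3 subsurface,     // e.g. a wrap/scatter term
                 Float3 lightDiffuse,   // reconstructed color * N.L * att
                 float  specular,       // reconstructed (R.V^n)^mn
                 Float3 albedo, Float3 specColor)
{
    return add(add(ambient, subsurface),
               add(mul(lightDiffuse, albedo), scale(specColor, specular)));
}

int main()
{
    Float3 c = shadeSkin({0.05f, 0.05f, 0.06f}, {0.10f, 0.03f, 0.02f},
                         {0.80f, 0.75f, 0.70f}, 0.25f,
                         {0.90f, 0.70f, 0.60f}, {1.0f, 1.0f, 1.0f});
    std::printf("final color: %.3f %.3f %.3f\n", c.r, c.g, c.b);
}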

Quote:
The thing I did not really mention was:
If you can reconstruct your diffuse and specular terms in the forward rendering pass, you can do all kinds of lighting models there.

Indeed, this is what is very attractive about this technique :).
Quote:
What I mean is: if you have R.V^n as the specular term...

My only problem with this is that you are presenting R.V^n as though it were a constant... but after summation over multiple lights you are actually dealing with something different. I.e.,

(Ra.Va)^n + (Rb.Vb)^n + ... + (Ri.Vi)^n != (Ra.Va + Rb.Vb + ... + Ri.Vi)^n

So you can't just factor that term out and then apply a material specular exponent, as it alters the math behind the lighting. Passing the exponent in from the initial depth/normal pass seems a good workaround, but then it does start to tie your material properties in with the lighting properties, making it harder to apply, say, Cook-Torrance to one material and Phong to another...

What I would do to reconstruct R.V^n is based on the current solution.

So one channel would hold R.V^n * N.L * Att and another channel would hold N.L * Att, so we would have both terms from each light. Both channels would be blended in the same way. I just added this idea earlier in this thread.
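
For reference, "blended in the same way" can be as simple as one additive blend state shared by all light-pass draws. A D3D9-style sketch, assuming a render-target format the hardware can actually blend into (e.g. 16:16:16:16F on newer cards):

#include <d3d9.h>

// Additive blending: every light's output simply sums into the light
// buffer, so the specular and N.L * att channels accumulate identically.
void setAdditiveLightBlend(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_BLENDOP,   D3DBLENDOP_ADD);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
    // With D3DRS_SEPARATEALPHABLENDENABLE left FALSE (the default),
    // the alpha channel is blended with the same ONE/ONE state.
}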

Cool, seems interesting.

Quote:
Original post by wolf
The worst thing that can happen is that it becomes as inefficient as a deferred renderer.

... but don't kid yourself: this is a deferred renderer, and there is no meaningful distinction between it and a renderer that writes out all of the BRDF parameters and reads them back in. You've simply chosen to factor Phong-like BRDFs in a slightly different way. Indeed, ironically, your factorization is less flexible with respect to multiple, arbitrary BRDFs than "standard" deferred rendering, so if you're comfortable with this approach then you must be pretty comfortable with deferred rendering in general whether you know it or not ;)

It is interesting to consider different specular exponents though... indeed I'm concerned that any non-linear term in the BRDF cannot be factored similarly. Thus while you're calling things the "diffuse" and "specular" parts of the BRDF, these formulations are by no means common to all BRDFs. Even Blinn and Phong BRDFs differ in their definition of the specular portion, and more complicated BRDFs can't even be decomposed like this.

That said - as it turns out - you don't really need very many BRDFs to parameterize most (if not all) of the materials in a scene reasonably :) That's why having multiple "materials" in a deferred renderer isn't actually as much of a problem as people think it is at first glance.

In any case there are certainly some interesting ideas here, but please do realize that all of these things (light indexed, light pre-pass, etc) are all exactly deferred renderers. They all have the same complexities (except light indexed which manages to throw unbounded storage into the mix ;)), scalings and characteristics... we're just playing with the details now.

Also, don't be too blind to the various trade-offs: in the long run, any "bandwidth savings" technique that involves resubmitting and re-transforming the scene geometry (as this technique does) will probably lose to simply storing extra values in the first pass ("standard" deferred rendering). Even with perfect LOD and occlusion culling (which we don't have), the best-case bandwidth of re-rasterizing a screen full of triangles is - surprise, surprise - proportional to the frame-buffer resolution. Thus you really can't call deferred rendering "inefficient" without saying the exact same thing about both the light-indexed and light pre-pass rendering ideas.

Don't misinterpret me: I like deferred rendering and thus I think all of this new stuff is great! At the same time, however, do realize that you're just fiddling with the details of deferred rendering, so you really shouldn't trash it so much ;)

Quote:
then you must be pretty comfortable with deferred rendering in general whether you know it or not ;)
LOL :-) ... this might be possible.

Quote:
At the same time, however, do realize that you're just fiddling with the details of deferred rendering, so you really shouldn't trash it so much ;)
Man, you are hitting back hard :-) ... let's say you won this round and I will work on my form for the next one :-) ... we could pick shadows next time :-)

While I agree with Andy on most of his points, I have to say that I think this is a clever approach and definitely a good avenue for experimentation. I'd love to do some side-by-side profiling of this against more "traditional" DR techniques on various classes of hardware, but unfortunately I don't think I have the time or the hardware.

Quote:
Original post by wolf
Man, you are hitting back hard :-) ... let's say you won this round and I will work on my form for the next one :-) ... we could pick shadows next time :-)

Haha no offense was intended; I just wanted to note that we're actually in agreement here about deferred rendering :) Regarding shadows, I'm at the point of being sick of them... once I finish my thesis I think I'll swear off them and let others continue the work. I don't think I have enough energy for another shadows discussion ;)

Quote:
Original post by MJP
While I agree with Andy on most of his points, I have to say that I think this is a clever approach and definitely a good avenue for experimentation.

Yup I agree 100%. I'm certainly interested in different ways to parameterize the problem and I think the deferred rendering "factorization" is the most promising moving forward. Thus techniques like this and light-indexed deferred rendering are very interesting to me and I'm eager to see how they work in practice!

So in summary, keep up the good work wolf et al!

Quote:
I don't think I have enough energy for another shadows discussion ;)
This will give me a fair chance then :-) ... no, I am also tired of them. I have been putting shadows into games for 2 1/2 years now, and sometimes I walk outside with the family and find myself thinking about the quality of the penumbras that surround me. My last project was taking photos of shadows to document how they behave in the wild ... :-) ... so yes, I am trying to get away from them as well.

Quote:
deferred rendering "factorization"
... you do not have to use the term deferred rendering here ... it is forward rendering :-) The lighting is done in a forward rendering pass because I construct the lighting equation there. I defer only the light properties, so I prefer Light Pre-Pass Renderer :-)

Quote:
Original post by wolf
you do not have to use the term deferred rendering here ... it is forward rendering :-) The lighting is done in a forward rendering pass because I construct the lighting equation there. I defer only the light properties, so I prefer Light Pre-Pass Renderer :-)

Call it what you like, but I consider anything that uses the rasterizer to solve the light contribution problem to be "deferred rendering". How you factor the BRDF/BSDF evaluation work is irrelevant to this. If you're accumulating light contributions in screen space, I consider that deferred lighting.

In any case as I mentioned it has the same theoretical complexities as more "standard" deferred rendering, so I don't see a real need to make up a new, unrelated name.

Here's a little idea: instead of rendering the index of lights or the LightColor * N.L * Att part of the lighting equation, I think it is possible to use something like the Source engine's radiosity normal mapping, where you accumulate the light vectors and light colors over 3 axes and then use them in the forward rendering pass to calculate all lights in a single pass.
The big disadvantage of this idea is that you now need 3 render targets to store all the light information (two 32-bit plus one 8-bit).
The advantage is that you could maybe use more lighting models with this method. (But I'm not sure :p since I only recently started looking into different lighting models like Oren-Nayar and Cook-Torrance.)
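
A sketch of how that per-axis accumulation might look, using the well-known Half-Life 2 tangent-space basis; how the results would actually be packed into the render targets is left open, and all names here are assumptions:

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Rgb { float r, g, b; };

// The three Half-Life 2 "radiosity normal mapping" basis vectors
// (tangent space).
static const float kBasis[3][3] = {
    { -1.0f / std::sqrt(6.0f), -1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
    { -1.0f / std::sqrt(6.0f),  1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
    {  std::sqrt(2.0f / 3.0f),  0.0f,                   1.0f / std::sqrt(3.0f) },
};

// Accumulate one light into three per-axis colors, weighted by how well
// the light direction lines up with each basis axis.
void accumulate(Rgb axisColor[3], const float lightDirTS[3],
                Rgb lightColor, float attenuation)
{
    for (int i = 0; i < 3; ++i) {
        const float w = std::max(0.0f, kBasis[i][0] * lightDirTS[0]
                                     + kBasis[i][1] * lightDirTS[1]
                                     + kBasis[i][2] * lightDirTS[2]) * attenuation;
        axisColor[i].r += lightColor.r * w;
        axisColor[i].g += lightColor.g * w;
        axisColor[i].b += lightColor.b * w;
    }
}

int main()
{
    Rgb axis[3] = {};
    const float dir[3] = { 0.0f, 0.0f, 1.0f }; // light along the normal
    accumulate(axis, dir, Rgb{ 1.0f, 1.0f, 1.0f }, 1.0f);
    std::printf("axis 0 receives %.3f of the light\n", axis[0].r);
}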
