In particular I've written a fairly optimized forward renderer that generates shader permutations on the fly (from light and surface shaders), selecting and sorting based on BVH intersections: Old Screenshot (you get the idea).
Quote:Original post by Woodchuck
I'm talking about a Blinn shader that does NdotL, pow(NdotH, gloss), normal mapping, and often parallax mapping. Not an old lightmap or plain NdotL shader.
What made you think that I was talking about anything else? I even consider that to be a pretty basic lighting system myself, but I could be out of touch ;)
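For reference, here's the kind of math I mean, as a minimal C++ sketch (the float3 helpers and the blinn() function are just illustrative names, not from any real engine):

[code]
#include <cmath>
#include <algorithm>

// Minimal 3-vector so the sketch is self-contained.
struct float3 { float x, y, z; };

float3 operator+(float3 a, float3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
float3 operator*(float3 a, float s)  { return { a.x * s, a.y * s, a.z * s }; }
float  dot(float3 a, float3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
float3 normalize(float3 v)           { return v * (1.0f / std::sqrt(dot(v, v))); }

// Per-pixel Blinn lighting: N is the (possibly normal-mapped) surface normal,
// L points toward the light, V toward the viewer; all unit length.
float blinn(float3 N, float3 L, float3 V, float gloss)
{
    float diffuse  = std::max(dot(N, L), 0.0f);                    // NdotL
    float3 H       = normalize(L + V);                             // half vector
    float specular = std::pow(std::max(dot(N, H), 0.0f), gloss);   // pow(NdotH, gloss)
    return diffuse + specular;   // light color, attenuation, etc. omitted
}
[/code]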
Quote:Original post by Woodchuck
If you want this, just reduce your geometry polygon count.
You shouldn't have to reduce your polygon count... there's no reason why lighting and geometric complexity have to be related by O(G*L) (G objects times L lights). Deferred rendering provides an elegant solution to that!
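To make the O(G*L) point concrete, compare the two pass structures (a hedged sketch; the types and draw functions are made-up names, not a real API):

[code]
#include <vector>

struct Object {};   // placeholder scene types, purely illustrative
struct Light  {};

void drawLit(const Object&, const Light&);   // hypothetical renderer entry points
void drawToGBuffer(const Object&);
void drawLightVolume(const Light&);

// Single-light-per-pass forward: every light is applied to every object it
// might touch, so shading work grows as O(G * L).
void renderForward(const std::vector<Object>& objects, const std::vector<Light>& lights)
{
    for (const Light& light : lights)
        for (const Object& obj : objects)
            drawLit(obj, light);         // up to G * L passes
}

// Deferred: geometry is rasterized once into a G-buffer, then each light is a
// screen-space pass over only the pixels it covers, so work grows as O(G + L).
void renderDeferred(const std::vector<Object>& objects, const std::vector<Light>& lights)
{
    for (const Object& obj : objects)
        drawToGBuffer(obj);              // G passes: depth/normal/albedo
    for (const Light& light : lights)
        drawLightVolume(light);          // L passes
}
[/code]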
Quote:Original post by Woodchuck
Normal mapping and parallax are made to replace polygons no?
Not entirely, no. They're useful for cases where the lighting model and some basic parallax are the only discernible effect of complex geometry, but useless in other cases that actually need to model proper occlusion.
These sorts of techniques help us get back to a system wherein rasterization is a win (i.e. polygons larger than a pixel), but they do *not* solve the global scene complexity problem - far from it. Note that polygon counts are *still* increasing even in new games that make use of these sorts of shaders.
Quote:Original post by Woodchuck
In other terms, I think there are fewer 'overlapped' pixels than pixels that are shaded for nothing for a given light.
I still don't know what you're saying, or really how it relates to the conversation, sorry :(
If you're talking about shading pixels that will eventually be occluded, that's a total non-issue in any of these techniques: occlusion queries, a pre-Z pass, and/or deferred rendering all solve it easily. What I'm talking about is the granularity of "light contribution" calculations, which simply cannot be tight enough when done on the CPU.
Imagine a large triangle that fills the whole screen, yet only a small piece of it is affected by a light (these are exactly the sorts of cases you're suggesting will become common with normal mapping, etc.). There's no way to avoid wasting a lot of power computing per-pixel lighting over the whole triangle when only a small portion of it will actually get lit. Deferred rendering solves this.
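A deferred light pass only has to bound the light in screen space and run the light shader inside that bound. As a sketch (assuming a point light whose bounding sphere has already been projected; Rect and the function name are illustrative):

[code]
#include <cmath>
#include <algorithm>

struct Rect { int x0, y0, x1, y1; };   // pixel-space scissor rectangle

// Conservative screen-space bound for a point light: cx/cy is the projected
// light center in pixels, projectedRadius its projected bounding-sphere
// radius. Pixels outside this rect (i.e. most of that huge triangle) never
// run the light shader at all.
Rect lightScissor(float cx, float cy, float projectedRadius, int width, int height)
{
    Rect r;
    r.x0 = std::max(0,      (int)std::floor(cx - projectedRadius));
    r.y0 = std::max(0,      (int)std::floor(cy - projectedRadius));
    r.x1 = std::min(width,  (int)std::ceil (cx + projectedRadius));
    r.y1 = std::min(height, (int)std::ceil (cy + projectedRadius));
    return r;
}
[/code]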
Note that you can't even compute which *triangles* might be affected by a light, since that's too expensive. The best one can reasonably do is test object bounding volumes, which is an extremely coarse way to go and *still* eats CPU power. Deferred shading solves this per-pixel, and cheaply...
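For comparison, the per-object CPU test is about this coarse (a sketch; the Sphere struct is illustrative): one sphere-overlap check, and if it passes, the *whole* object gets the light, however few of its pixels end up lit.

[code]
struct Sphere { float x, y, z, radius; };   // object / light bounding volumes

// Coarse CPU-side culling: if the bounding spheres overlap at all, the entire
// object is drawn with the light applied.
bool lightTouchesObject(const Sphere& obj, const Sphere& light)
{
    float dx = obj.x - light.x;
    float dy = obj.y - light.y;
    float dz = obj.z - light.z;
    float r  = obj.radius + light.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;
}
[/code]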
Quote:Original post by Woodchuck
Mmmh, for a draw call you have a fixed number of SetTexture calls and other things like that. That's why counting draw calls alone is a better measure than just optimizing away one or two SetTexture calls between two draw calls ;)
Don't know what you're trying to say here... all I'm saying is that *any* state change breaks batching. So having a different shader for every object (or even triangle) in your scene, to handle lots of different lighting situations, will ruin all batching.
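Which is exactly why renderers sort by shader/state before anything else: the fewer distinct shaders, the longer the batchable runs. A minimal sketch (DrawCall and its fields are made up for illustration):

[code]
#include <vector>
#include <algorithm>

struct DrawCall { int shaderId; int textureId; };   // illustrative fields only

bool byState(const DrawCall& a, const DrawCall& b)
{
    if (a.shaderId != b.shaderId) return a.shaderId < b.shaderId;
    return a.textureId < b.textureId;
}

// Sort so all draw calls sharing a shader (then a texture) are adjacent:
// every shader switch in the sorted list is an unavoidable state change, and
// everything between two switches can be batched. One shader per object
// defeats this completely.
void sortForBatching(std::vector<DrawCall>& calls)
{
    std::sort(calls.begin(), calls.end(), byState);
}
[/code]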
Quote:Original post by Schrompf
[edit]I see, there's a conflict. To be more precise: neither vertex processing nor rasterization was ever a problem *for us*. NVPerfHUD showed that pixel processing is the limiting factor.[/edit]
Great - although I'm sure you can see how they could easily become so for a more complex scene using a single light per pass (as I was discussing).
This is one reason why I think that "multiple lights per pass" and deferred rendering are really the only two reasonable options moving forward. As hardware becomes more capable (and it's already quite usable) and scenes become more complex, I suspect that deferred rendering will continue to look better and better in comparison.
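By "multiple lights per pass" I mean the pixel shader itself looping over a small set of lights, so each object is rasterized once regardless of light count. Roughly like this (plain C++ standing in for shader code, reusing the float3 helpers from the Blinn sketch above; evaluateLight is a hypothetical stand-in for the full lighting term):

[code]
struct Surface {};   // placeholder for per-pixel inputs (normal, albedo, ...)
struct Light   {};

float3 evaluateLight(const Surface& s, const Light& l);  // one light's Blinn term

// "Multiple lights per pass": accumulate several lights in a single draw, so
// adding lights (up to the per-pass limit) adds no extra geometry passes.
float3 shadePixel(const Surface& s, const Light* lights, int lightCount)
{
    float3 color = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < lightCount; ++i)
        color = color + evaluateLight(s, lights[i]);
    return color;
}
[/code]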