Graphical Conundrum

Started by
8 comments, last by bluntman 12 years, 10 months ago
Ok, so let me explain my problem, and thank you to anyone who even reads all of this, much less has a response, good or bad. Any feedback at all is greatly appreciated because I've hit a roadblock here.

My problem is that I've been tasked with introducing more lighting into a game I'm a team member of. We currently have one light source and it's ok, but we really want to give the game a brand new feel, and we think this is the way to do it. The scene is fairly small and never changes (it's a tower defense, so think in those terms). I'm limited by two things: I can only have a memory footprint of some 250 MB (which we're already over and trying to cut down), and I'm only allowed to use vertex and pixel shader model 2.0, nothing more, nothing less.

When I try to implement more than just the one light source in a single pass, I'm restricted to two lights before I run into the 64-instruction limit. If I make it multipass instead, I hit the constant register limit (31 constants) when I try to add more than, I think it was, 5 lights. I looked into a deferred rendering system, but the problems there are that the memory footprint would be too high, that transparency would be annoying (I have a lot of particles that really depend on it for blending and the like), and that I've never built a system like that before and only have about two weeks to get this in, so I'm not even sure it's within my time limit.

I'm trying to get something like 10 lights into the game, but it's proving to be a problem. I don't doubt I can build whatever system is needed (it just takes some research and hard work, which I'm good at); I just don't know what the best solution is here. Again, I'd love any and all feedback you guys might have. Another idea was to do multiple draw calls to basically get around the limits, but I don't know how I feel about that; it seems so taboo and hacky, but I need something.
Could you bake your main light source into the models, then use your dynamic lights to illuminate sections of the map? Split the map into portions and render each portion with the two "most important" lights for that portion (no one will notice the absence of the less important lights).
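For the "most important" part, a simple per-light importance score is usually enough. A rough sketch, assuming point lights with a position, colour and radius (the names here are made up):

// Brighter and closer lights score higher. Evaluate this against each
// map portion (e.g. its centre), sort descending, and keep the top two
// lights for that portion's draw calls.
float LightImportance(float3 portionCenter, float3 lightPos, float3 lightColor, float lightRadius)
{
    float dist      = length(lightPos - portionCenter);
    float luminance = dot(lightColor, float3(0.299, 0.587, 0.114)); // perceived brightness
    float falloff   = saturate(1.0 - dist / lightRadius);           // 0 outside the radius
    return luminance * falloff;
}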
If there were no instruction limit, would you really be able to afford to draw 10 lights on SM 2.0 hardware anyway?

(I'm not saying you can't; it just strikes me as a lot of lights for hardware that old, in a forward-rendered setup.)
Some older games would convert distant light sources to spherical harmonics, then evaluate that in the shader to get the diffuse contribution. So basically, as your character (or other dynamic objects) moves through the world, you would pick out some small number of "primary lights" by proximity and light those directly in the shader, and encode the distant light sources to SH. As the character moves away from a primary light, you would fade it out and simultaneously start blending it into the SH.
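As a reference point, a minimal order-1 SH version of that idea might look like this (names are hypothetical; the accumulation would normally happen on the CPU per object, the evaluation in the vertex or pixel shader):

// Accumulate one distant light into 4 SH coefficients per colour channel.
// dirToLight must be normalized and point from the object towards the light.
void AddLightToSH(float3 dirToLight, float3 color, inout float3 sh[4])
{
    sh[0] += color * 0.282095;                 // Y0,0 (constant band)
    sh[1] += color * 0.488603 * dirToLight.y;  // Y1,-1
    sh[2] += color * 0.488603 * dirToLight.z;  // Y1,0
    sh[3] += color * 0.488603 * dirToLight.x;  // Y1,1
}

// Evaluate the SH with the surface normal to get a diffuse term.
// The constants fold the SH basis with the clamped-cosine lobe (pi, 2*pi/3).
float3 EvaluateSHDiffuse(float3 n, float3 sh[4])
{
    return sh[0] * 0.886227
         + sh[1] * 1.023328 * n.y
         + sh[2] * 1.023328 * n.z
         + sh[3] * 1.023328 * n.x;
}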
Is it just me, or is there a serious problem with 250+ MiB of data on SM 2.0 cards? Perhaps that's 250 MiB total?

If the scene is static and the lighting is mostly static, I'd consider doing per-vertex lighting on the CPU... but if you're already over budget, that's not an option.
I think your priority has to be cleaning up the memory. If you're out of instructions, horsepower and memory, there's no trade-off left to make.
Can you estimate the number of vertices to be lit? Is the meshing dense enough?

Perhaps multipass lighting could do it. For a simple lighting model (accumulate the lights, then multiply by the diffuse) it can be viable. Without knowing your assets it's difficult to tell, but this might well be the most promising solution.
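For illustration, a rough sketch of that accumulate-then-multiply approach, one additive pass per light (the names here are made up):

// Uniforms set per light pass.
float4 lightPos;    // xyz = world-space position
float4 lightColor;  // rgb = colour
float  lightRadius; // falloff distance

// Rendered once per light with additive blending (srcblend ONE, destblend ONE).
// A final pass multiplies the accumulated light by the diffuse texture.
float4 LightPassPS(float3 worldPos : TEXCOORD0,
                   float3 normal   : TEXCOORD1) : COLOR0
{
    float3 toLight = lightPos.xyz - worldPos;
    float  att     = saturate(1.0 - length(toLight) / lightRadius);       // linear falloff
    float  ndotl   = saturate(dot(normalize(normal), normalize(toLight)));
    return float4(lightColor.rgb * ndotl * att, 1.0);
}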

Previously "Krohm"


This is a long shot, but what about a low-resolution light pre-pass? Render scene depth to a low-res target (256x256?), render the lights as you would in a deferred light pass, but simply accumulate light, not material response, then use the light texture as an input to your main render, with UVs in screen space. Hopefully you can reconstruct the fragment position and accumulate light in 64 instructions?
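A rough sketch of that accumulation pass, assuming linear view-space depth was written to the low-res target and the vertex shader passes down a view-space ray with z = 1 (all names hypothetical):

sampler DepthMap : register(s0);  // low-res linear view-space depth
float4 lightPosVS;                // xyz = light position in view space, w = radius
float4 lightColor;                // rgb = colour

// Rendered additively per light into the low-res light texture.
float4 LightAccumPS(float2 uv      : TEXCOORD0,
                    float3 viewRay : TEXCOORD1) : COLOR0   // view-space ray, z == 1
{
    float  depth = tex2D(DepthMap, uv).r;                  // linear view-space depth
    float3 posVS = viewRay * depth;                        // reconstructed position
    float  dist  = length(lightPosVS.xyz - posVS);
    float  att   = saturate(1.0 - dist / lightPosVS.w);    // simple falloff, no normals stored
    return float4(lightColor.rgb * att, 1.0);
}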
Our artist made super-high-poly models and our designer decided to put a ton of them on screen at once, so I'm kind of stuck in the middle of two team members here, and neither will budge because each thinks it's the other's fault.

Having read your suggestions and consulted some other people, I think my best option is to bake as many static lights as I can into the scene (by "I" I mean my artist) and then really figure out which dynamic lights are the most important. MJP's suggestion is something I had been thinking about: since it's entirely possible that only two or three lights are visible at any given moment, while there are five or six in the scene in total, I can cull to determine which lights currently need to be used, pass those lights' parameters into my shaders, and all will be good. The only problem is that lights could pop in and out, as they're going to span a fair bit of the field, so fading them in and out, like MJP suggested, might solve that.
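For the fade, a distance-based weight folded into the light colour before it's uploaded usually hides the popping; a tiny sketch (fadeStart/fadeEnd are made-up tuning values):

// Returns 1 inside fadeStart, 0 beyond fadeEnd, and blends smoothly in between.
// Multiply the light's colour by this weight before passing it to the shader,
// so a light slides to black instead of popping when it gets culled.
float LightFadeWeight(float3 camPos, float3 lightPos, float fadeStart, float fadeEnd)
{
    float dist = length(lightPos - camPos);
    return 1.0 - smoothstep(fadeStart, fadeEnd, dist);
}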

Thanks for all the responses; they're super helpful. If you guys have any more suggestions about this plan, I'd love to hear them.
It sounds like the low-res pre-pass suggestion amounts to using a lower-resolution light buffer than the G-buffer. For that you need inferred rendering (which would also give him some flexibility with transparency).

However, with a two-week timeline, there's no way he'll be able to deliver a complete solution in time using that method.

(Also, scene depth in a deferred setup would not go into a square buffer unless your final output was also square, btw.)
This might not be the best method, but it should be very doable in a few minutes. Note that the lighting won't be 100% correct in some/most cases... you might get away with it just fine, though.
Basically, you calculate your light direction vectors in the vertex shader (like you normally would), but instead of using multiple texcoords for multiple lights, you store all the data in one texcoord.

Behold the mighty hybrid per pixel lighting code (...not):

Vertexshader:

// "hybrid" per-pixel lighting: accumulate all the lights into a single
// direction/colour pair so the pixel shader only has to light once.
Out.lightDir   = 0;
Out.lightColor = 0;

for (int i = 0; i < NUM_LIGHTS; i++)
{
    float3 Ln  = vecLightPos[i].xyz - worldPos.xyz;
    half   att = 1 - saturate(length(Ln) / vecLightPos[i].w); // simple linear attenuation, w = radius

    Out.lightDir.xyz   += normalize(Ln) * att;                // attenuation-weighted direction
    Out.lightColor.xyz += vecLightColor[i].xyz * att;
    Out.lightColor.w   += att;                                // total attenuation, reused for specular
}

Out.viewDir.xyz = vecViewPos.xyz - worldPos.xyz;


Pixelshader (just the standard lighting stuff):

// Half vector; the view direction is normalized so its world-space length
// doesn't swamp the accumulated light direction.
half3 Hn = normalize(normalize(In.viewDir.xyz) + In.lightDir.xyz);

// lit() returns (ambient, diffuse, specular, 1); .yz picks diffuse and specular.
// The *1.2 and pow(..., 1.2) are fudge factors, nothing more.
half2 lighting = saturate(lit(dot(In.lightDir.xyz * 1.2, In.normal.xyz),
                              dot(Hn.xyz, In.normal.xyz),
                              specPower)).yz;

half3 result = (lighting.x * In.lightColor.xyz)
             + (lighting.y * pow(In.lightColor.a, 1.2) * lighting.x);


I haven't used this in a while since I'm using inferred lighting now, but it did a pretty good job back then, especially when combined with more expensive techniques like POM or SSS.
I can't stress enough, though, that this is very hacky and plain wrong from a mathematical point of view... but at the end of the shader-day it's the (faked) results that count, not the correct math :wink:

Hope this helps :)

From what I understand there is no G-buffer here; this game has a forward rendering system. What I am saying is to do a light pre-pass, but at a lower resolution than the final render:
http://diaryofagraphicsprogrammer.blogspot.com/2008/03/light-pre-pass-renderer.html
Basically as described there, but simplified as needed (only render the light's diffuse contribution into the lighting buffer, don't store normals, ...).
When applied to the final render it might make the lighting a bit fuzzy at discontinuities, but that might be acceptable.
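Applying it in the main forward pass might look something like this (a sketch; sampler and input names are made up, and the vertex shader is assumed to pass the clip-space position down):

sampler DiffuseMap : register(s0);
sampler LightMap   : register(s1);   // the low-res accumulated light texture

float4 MainPS(float2 uv      : TEXCOORD0,
              float4 clipPos : TEXCOORD1) : COLOR0
{
    // Clip space -> [0,1] screen-space UVs (Y is flipped in D3D9; add the
    // usual half-texel offset for an exact match).
    float2 screenUV = clipPos.xy / clipPos.w * float2(0.5, -0.5) + 0.5;

    float3 light  = tex2D(LightMap, screenUV).rgb;
    float3 albedo = tex2D(DiffuseMap, uv).rgb;
    return float4(albedo * light, 1.0);
}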

