Deferred Lighting for OpenGL ES

Started by
3 comments, last by Seabolt 11 years, 1 month ago

Would it be wise to switch over to deferred rendering in the future to perform lighting? Right now I'm doing my lighting in shaders, and it's messy since I have to recompile different versions of my model-rendering shader pair to account for all the different lighting combinations, etc... I can only have one of each type of light due to varying constraints, and even that really seems to push iOS hardware.

I've heard that deferred rendering is pretty expensive on mobile devices for now, since fill rate on mobile GPUs is far more limited than on their desktop counterparts. Then there's the OpenGL ES 3.0 spec that's just been ratified, giving us a new take on functionality...


Well with deferred rendering you take on a few disadvantages:

1.) You're very fill-rate bound: filling your "G-buffer" means writing to four different framebuffers, (if you're using four buffers, that is).

2.) Anti-aliasing and Deferred Rendering don't work very well together.

3.) Alpha objects can't really run through a deferred renderer, so you'll have to do a forward pass for any transparent objects.

On mobile devices, solving these problems can be a little unrealistic. But you can definitely give it a shot and find out!

Good luck!

Perception is when one imagination clashes with another

I've been weighing those disadvantages too, but it would mean I could have many lights in the scene, and it would get me away from messy shader code. Maybe I'll develop the system for desktop first, since MRTs actually exist on that platform, and a year from now, when OpenGL ES 3.0 starts making its way onto newer devices, I'll make it ES-compatible. I could imagine the way I'd get past the transparency issue is to loop through my scene twice: once just to draw all meshes that aren't completely opaque, and a second time for all opaque objects (or maybe it's the other way around?).

If I want to apply post-processing effects, I've already got the scene rendered to a texture.

1.) You're very fill-rate bound: filling your "G-buffer" means writing to four different framebuffers, (if you're using four buffers, that is).

This can be solved with the light-pre-pass technique. Granted, you'll be rendering the scene geometry twice - once for the pre-lighting pass and once for the post-lighting pass - but you won't require a "fat" G-buffer as with regular deferred rendering.

Here's a proof of concept already implementing Light-Pre-Pass on an iPhone:

http://www.altdevblogaday.com/2012/03/01/light-pre-pass-renderer-on-iphone/

Saving the world, one semi-colon at a time.

@MajorTom - That light pre-pass is pretty cool, though I imagine it wouldn't be as feasible on some of the lower-end mobile devices.
@Vincent_M - The issue you run into with deferred rendering and transparency is that you can only store one depth value per pixel. You could do multiple passes, (which would be pretty expensive), but if you ever had alpha over alpha, you'd run into the same issue again.

Perception is when one imagination clashes with another

