Trying to understand lighting in general


Hi,

I'm quite new to graphics programming, but I've been programming for quite some time now.

I'm trying to understand in general how lighting works (not the shader logic behind it) and how it should be structured within a program.

Let's say I have X objects in my scene and 3 different types of light sources, where all light sources affect all objects: point, spotlight, and diffuse. As I understand it, I need 3 different shaders (the same shader compiled with different defines, or one that branches). In the more standard approach (I guess it's called Forward Rendering), do I need to render each object 3 times with a different shader each time? That seems a bit inefficient to my untrained eyes. And if so, will the output be merged automatically, or do I need to do something?

If I have the same scenario but use a Deferred approach, how would that work?


Hi, I actually found the answer to my first question on Rick's (not sure what his gamedev handle is) Tower22 blog:

http://tower22.blogspot.dk/2010/11/from-deferred-to-inferred-part-uno.html - in case anyone has the same question.

So each pixel shader outputs the pixel color, and the next one takes that color as input, so it seems there's no need to do anything to merge the output.

Hey,

Not sure what was on the blog as I wrote that entry a million years ago, but for Deferred Lighting, the recipe is basically as follows:

1- Render pixel attributes to G-Buffers (diffuse, specular, gloss, normal, position/depth, ... whatever you need)

2- For each light (point, spot, whatever) apply a specific shader.

The shader draws pixels (into a texture buffer) using the information from step 1.

So, rendering the objects is really separated from the lighting step, which does not involve the 3D meshes of the objects.
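
To make steps 1 and 2 concrete, here is a minimal CPU-side sketch of the two passes, with plain arrays standing in for real render targets. Everything here (GBufferTexel, lightPass, the linear falloff) is made up for illustration, not taken from the blog:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct GBufferTexel {   // step 1 output, one entry per screen pixel
    Vec3 diffuse;       // surface albedo from the material
    Vec3 normal;        // world-space normal
    Vec3 position;      // world-space position (often reconstructed from depth)
};

struct PointLight { Vec3 position; Vec3 color; float radius; };

// Step 2: for each light, walk the pixels it can reach and ADD its
// contribution into an accumulation buffer. On a GPU this loop becomes
// one draw per light with additive blending enabled.
void lightPass(const std::vector<GBufferTexel>& gbuffer,
               const std::vector<PointLight>& lights,
               std::vector<Vec3>& accum)
{
    for (const PointLight& light : lights) {
        for (std::size_t i = 0; i < gbuffer.size(); ++i) {
            const GBufferTexel& px = gbuffer[i];
            Vec3 toLight = { light.position.x - px.position.x,
                             light.position.y - px.position.y,
                             light.position.z - px.position.z };
            float dist = std::sqrt(toLight.x * toLight.x +
                                   toLight.y * toLight.y +
                                   toLight.z * toLight.z);
            if (dist >= light.radius || dist <= 0.0f)
                continue;                              // pixel outside the light volume
            float ndotl = (px.normal.x * toLight.x +
                           px.normal.y * toLight.y +
                           px.normal.z * toLight.z) / dist;
            if (ndotl <= 0.0f)
                continue;                              // surface faces away
            float atten = 1.0f - dist / light.radius;  // simple linear falloff
            accum[i].x += px.diffuse.x * light.color.x * ndotl * atten;
            accum[i].y += px.diffuse.y * light.color.y * ndotl * atten;
            accum[i].z += px.diffuse.z * light.color.z * ndotl * atten;
        }
    }
}
```

Note how lightPass never touches a mesh: it only reads back per-pixel attributes.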

Notice that nowadays several newer techniques such as "Tiled Deferred Rendering" are being introduced. The idea remains the same, though it's more optimized. Either way, I would start with a simple deferred renderer first for learning purposes.

Good luck!

As I understand it, I need 3 different shaders (the same shader compiled with different defines, or one that branches). In the more standard approach (I guess it's called Forward Rendering), do I need to render each object 3 times with a different shader each time? That seems a bit inefficient to my untrained eyes.

Yes, it is. The main problem with forward techniques (and the main advantage of deferred) is exactly this: forward couples lighting complexity to the object shaders, while deferred decouples them.
Nobody says you need 3 shaders and three draw calls, though. If you know you have 1 point, 1 spot, and 1 area light (I suppose that's your "diffuse"), just write one shader that evaluates all three.
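
A rough sketch of such a single-pass evaluation, written as plain C++ for readability (the real thing would live in a pixel shader); the function name and the bare-bones diffuse-only math are illustrative, not from any particular engine:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

// One "shader", three lights: sum a directional, a point and a spot light
// for a single surface point (diffuse term only, no attenuation for brevity).
float shadePixel(Vec3 pos, Vec3 n,
                 Vec3 dirLightDir,                                 // directional light
                 Vec3 pointPos,                                    // point light
                 Vec3 spotPos, Vec3 spotDir, float spotCosCutoff)  // spot light
{
    float total = 0.0f;

    // Directional: constant direction everywhere.
    total += std::max(0.0f, dot(n, normalize({ -dirLightDir.x, -dirLightDir.y, -dirLightDir.z })));

    // Point: direction depends on the pixel's position.
    Vec3 toPoint = normalize({ pointPos.x - pos.x, pointPos.y - pos.y, pointPos.z - pos.z });
    total += std::max(0.0f, dot(n, toPoint));

    // Spot: like a point light, but masked by the cone angle.
    Vec3 toSpot = normalize({ spotPos.x - pos.x, spotPos.y - pos.y, spotPos.z - pos.z });
    bool inCone = dot(normalize(spotDir), Vec3{ -toSpot.x, -toSpot.y, -toSpot.z }) > spotCosCutoff;
    if (inCone)
        total += std::max(0.0f, dot(n, toSpot));

    return total;
}
```

Add a fourth light and you have to edit (or recompile) this function. That coupling between scene lighting and object shaders is exactly what deferred rendering removes.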

Accumulating more lighting through multiple passes is not automatic; it's a blend operation that has to be turned on explicitly. You might have heard that blending is slow. Basic ("dumb" would be a better term) deferred rendering blends even more, so I guess we can all afford it. Or so they say.
What you report as the solution is basically a multi-pass technique doing the same thing via render-to-texture. It doesn't radically change what's going on conceptually, although the performance will hopefully be different.
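
"Turned on" here means render state set from the CPU side, not shader code. A minimal OpenGL sketch, assuming a current GL context and a bound light-accumulation target:

```cpp
// Additive blending: each light pass adds its result to what is already
// in the render target (dest = dest * 1 + src * 1).
#include <GL/gl.h>

void beginAdditiveLightPass()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
}

void endAdditiveLightPass()
{
    glDisable(GL_BLEND);
}
```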

Previously "Krohm"


Any clue where I should start looking if I want to learn more about how to implement Deferred Rendering? I haven't really found any good examples of it, but loads on Forward Rendering.

Not sure where I got it from... I believe I found code on Delphi3d.net (but that website is gone; maybe the examples are still out there somewhere) and in the nVidia SDK 9.5, which has a lot of example programs.

It's not that difficult, but do you know how to render data to texture buffers / G-buffers? The first step would be to render your (opaque) scene into a couple of buffers (note that you can render into multiple textures at the same time). Once you have these buffers, you can do all kinds of stuff with them, including deferred lighting. How far along are you right now?
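
For the "multiple textures at the same time" part, a rough OpenGL sketch of building a three-target G-buffer could look like this (assumes a GL 3.x context with a function loader already initialized; depth attachment and error checking omitted for brevity):

```cpp
// G-buffer creation: three textures attached to one framebuffer object, so a
// single geometry pass writes all of them (MRT). Calls are standard OpenGL 3.x.
#include <glad/glad.h>   // any GL function loader works; glad is just an example

GLuint createGBuffer(int width, int height, GLuint outTex[3])
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(3, outTex);
    for (int i = 0; i < 3; ++i) {
        glBindTexture(GL_TEXTURE_2D, outTex[i]);
        // RGBA16F leaves headroom for positions/normals; RGBA8 is fine for color.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                               GL_TEXTURE_2D, outTex[i], 0);
    }
    // Tell GL that the fragment shader writes three outputs at once.
    GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
    glDrawBuffers(3, bufs);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo;
}
```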


Well, I'm nowhere, it seems; I'm currently researching the subject so I can understand where to start. So it seems I first need to learn how to render to multiple textures, and master filling and displaying them.

Accumulating more lighting through multiple passes is not automatic; it's a blend operation that has to be turned on explicitly. You might have heard that blending is slow. Basic ("dumb" would be a better term) deferred rendering blends even more, so I guess we can all afford it. Or so they say.
What you report as the solution is basically a multi-pass technique doing the same thing via render-to-texture. It doesn't radically change what's going on conceptually, although the performance will hopefully be different.

So to be able to merge the output from the different shaders, I need to blend them? Is the blend operation something that I use in the shaders, or in the "CPU world"?

I don't know the DirectX terms exactly, but indeed, find an example that does "MRT" (multiple render targets) so you can draw an object into multiple buffers. I'm pretty sure the nVidia SDK has a lot of examples of that. Once you have set that up, try to render several pixel attributes into your textures. For example:

* texture1: rgb = pixel diffuse color, a = specular term

* texture2: rgb = pixel world position, a = specular glossiness

* texture3: rgb = pixel normal, a = ...?

Note this is just an example. You can compress data such as the normals and positions to free up room for other attributes. But this would be an easy start. Also note that this step does not involve any lighting so far.
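
As one example of compressing the normals: store only x and y and rebuild z when lighting. This is illustrative C++ (the shader code would mirror it) and assumes view-space normals whose z faces the camera; fancier encodings exist for the edge cases:

```cpp
// Drop one channel of a unit normal and reconstruct it later, freeing a
// G-buffer channel for another attribute.
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

Vec2 encodeNormal(Vec3 n)   // n must be unit length
{
    return { n.x * 0.5f + 0.5f, n.y * 0.5f + 0.5f };   // map [-1,1] to [0,1]
}

Vec3 decodeNormal(Vec2 e)
{
    float x = e.x * 2.0f - 1.0f;
    float y = e.y * 2.0f - 1.0f;
    float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y)); // rebuild the dropped axis
    return { x, y, z };
}
```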

>> blending

Yes. Compare it with Photoshop and layers. On the first layer, you have your scenery with its diffuse colors, no lights yet. On a second layer, you draw a red circle that represents a red point light. Set the layer blending mode to "additive" or "light up", or whatever it's called in PS. Then on layer 3, you can make another lamp, and so on. Finally, merge all the light layers and multiply the result with the first layer. It's not exactly the same, but pretty close.
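
In code, the layer analogy boils down to this (single-pixel version, illustrative names):

```cpp
// Add the light "layers" together, then multiply the result with the
// diffuse "layer" to get the final shaded color.
#include <vector>

struct Color { float r, g, b; };

Color composeLighting(Color diffuse, const std::vector<Color>& lightLayers)
{
    Color accum = { 0.0f, 0.0f, 0.0f };
    for (const Color& l : lightLayers) {   // each layer blends "additively"
        accum.r += l.r;
        accum.g += l.g;
        accum.b += l.b;
    }
    // Final merge: diffuse colors multiplied by the accumulated lighting.
    return { diffuse.r * accum.r, diffuse.g * accum.g, diffuse.b * accum.b };
}
```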

Blending is not too fast, though additive blending is pretty simple, and I wouldn't worry about the performance unless you want LOTS of lights and/or target lower-end hardware. But once you've mastered Deferred Lighting, you could pick up Compute Shaders, which allow you to do all lights in a single pass, applied to smaller tiles on the screen. The Battlefield 3 Frostbite engine has a nice paper that explains this "Tiled Deferred Lighting". But anyhow, that's a concern for later.
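
For the curious, the core idea behind that tiled approach is to bin lights into screen tiles first, so each pixel only loops over the lights that can actually reach it. A rough CPU sketch with made-up types (the real thing runs in a compute shader):

```cpp
#include <vector>

struct ScreenRect { int x0, y0, x1, y1; };   // a light's bounding rect in pixels

// Returns, for each tile, the indices of the lights whose screen-space
// bounds overlap that tile.
std::vector<std::vector<int>> binLights(const std::vector<ScreenRect>& lightRects,
                                        int width, int height, int tileSize)
{
    int tilesX = (width  + tileSize - 1) / tileSize;
    int tilesY = (height + tileSize - 1) / tileSize;
    std::vector<std::vector<int>> bins(tilesX * tilesY);

    for (int li = 0; li < (int)lightRects.size(); ++li) {
        const ScreenRect& r = lightRects[li];
        for (int ty = r.y0 / tileSize; ty <= r.y1 / tileSize && ty < tilesY; ++ty)
            for (int tx = r.x0 / tileSize; tx <= r.x1 / tileSize && tx < tilesX; ++tx)
                if (tx >= 0 && ty >= 0)
                    bins[ty * tilesX + tx].push_back(li);  // light li touches this tile
    }
    return bins;   // later: each tile shades its pixels with only bins[tile]
}
```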


Thank you spek and Krohm for your invaluable help. I now have a plan to work from and a better understanding of lighting. I will put your names in the credits when my super engine is done, which btw will make UnrealEngine and CryEngine look like amateur hour :P

Sure thing ;)

