
Hodgman

Posted 15 January 2013 - 06:40 PM

I'm just saying, going for top notch lighting/shading, made all engines go deferred on this generation of consoles.

But, not all engines did go deferred...? There's an absolute ton of forward rendered current-gen games, many with superb lighting!


I'm not saying that every game should go one way or another, but that the optimal pipeline will depend on the game (as opposed to, "it's impossible to go forward, forward doesn't work, no engines use forward rendering").

"Top notch" lighting/shading doesn't always mean "thousands of lights" -- like you said, with a racing game maybe you only need a few lights, but you instead need really complex BRDFs (like your Bugatti IOTD biggrin.png), and quite a few different ones at that. That's still "top notch, advanced lighting", despite not having 5000 tiny point lights...

 

Is your game about 1000 glowing sparks, or 1000 different kinds of paint? Each requires a different "advanced lighting" pipeline...

 

To take things to the extreme, imagine we've got 1000 lights covering the entire screen (very advanced ;P)

For deferred, we have 1000 passes over the screen, where each pass reads 96-128 bytes of G-Buffer data and writes out 64 bits (8 bytes) of HDR lighting data --- roughly 104-136 KB of bandwidth per pixel, or on the order of 90-120 GiB total at 720p (an impossible amount for current gen).

For forward, let's say we can do 10 lights per pass, so we'd do 100 passes over the screen, where each pass reads 64-96 bytes of data (everything we would've written into the G-buffer, minus hardware depth, which we get intrinsically) and writes out 64 bits (8 bytes) of HDR lighting data --- roughly 7-10 KB of bandwidth per pixel --- about 6-9 GiB total at 720p (still an insane amount, but maybe low enough for a couple of frames per second).
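The kind of back-of-envelope arithmetic used above can be made explicit with a short script. A 1280x720 target and an 8-byte (64-bit) HDR write per pass are the assumptions; the helper names are just for illustration:

```python
# Back-of-envelope bandwidth estimate for the 1000-light worst case.
PIXELS_720P = 1280 * 720
HDR_WRITE = 8  # 64-bit HDR render target: bytes written per pixel per pass

def bandwidth_per_pixel(passes, read_bytes, write_bytes=HDR_WRITE):
    """Bytes of read+write traffic per screen pixel across all passes."""
    return passes * (read_bytes + write_bytes)

def total_gib(per_pixel_bytes):
    """Total frame bandwidth in GiB at 720p."""
    return per_pixel_bytes * PIXELS_720P / 2**30

# Deferred: 1000 full-screen light passes, 96-128 byte G-Buffer reads.
deferred_lo = bandwidth_per_pixel(1000, 96)   # 104,000 bytes/pixel
deferred_hi = bandwidth_per_pixel(1000, 128)  # 136,000 bytes/pixel

# Forward: 10 lights per pass -> 100 passes, 64-96 byte attribute reads.
forward_lo = bandwidth_per_pixel(100, 64)     # 7,200 bytes/pixel
forward_hi = bandwidth_per_pixel(100, 96)     # 10,400 bytes/pixel
```

Either way you slice it, deferred ends up more than an order of magnitude over forward in this degenerate all-lights-cover-everything case.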

 
So both techniques fail miserably with thousands of large lights (though traditional forward actually does better than traditional deferred), but yes, if you want thousands of small lights applied to arbitrary objects, then deferred is a winner simply because it allows you to associate lights with screen-space areas, instead of associating lights with objects themselves.
However, light-index deferred and Forward+ both also use this same screen-space light association technique, but do their actual lighting using forward-rendering (and they're both implementable on current-gen consoles!!), so deferred isn't your only option for these situations.
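The screen-space light association shared by tiled deferred, light-indexed deferred, and Forward+ boils down to a binning step. Here's a minimal CPU-side sketch; the 16-pixel tile size and the conservative box-overlap test are illustrative assumptions (real implementations typically do this in a compute shader with per-tile frustum tests):

```python
# Minimal sketch of screen-space light binning: map each screen tile
# to the indices of the lights that overlap it.
TILE = 16  # pixels per tile side (a common but arbitrary choice)

def bin_lights(lights, width, height):
    """lights: list of (screen_x, screen_y, screen_radius) tuples,
    i.e. light bounds already projected to screen space."""
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    bins = {(tx, ty): [] for ty in range(tiles_y) for tx in range(tiles_x)}
    for i, (lx, ly, r) in enumerate(lights):
        # Conservative test: the light's screen-space bounding box.
        x0 = max(0, int((lx - r) // TILE))
        x1 = min(tiles_x - 1, int((lx + r) // TILE))
        y0 = max(0, int((ly - r) // TILE))
        y1 = min(tiles_y - 1, int((ly + r) // TILE))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[(tx, ty)].append(i)
    return bins
```

Each pixel then only evaluates the lights in its tile's list; whether the evaluation itself happens deferred or forward (Forward+) is a separate decision.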
 
Also, deferred-lighting ("light pre-pass") and inferred lighting shouldn't be in the same category as regular deferred shading, as they have advantages/disadvantages from both traditional forward and deferred approaches. They're hybrids that don't easily fit into either black-and-white category, and there's a huge number of console games that live in this grey area.
e.g. Uncharted performs deferred lighting, but only for dynamic lights affecting the environment, and forward renders everything else.
Or, in my last game, we forward rendered several lighting terms, then calculated deferred shadow masks after lighting, then combined the terms/masks in post. That's not traditional forward or deferred, but one of these weird hybrids...
We also didn't require lighting to exactly match the environment, so for dynamic objects we constructed light positions per-pixel, like God of War does (except with several resulting lights, instead of merging them all into one), and we could avoid placing them behind the pixel, which saved us a backfacing test (every light calculation gave bang for buck). Using this, we could get something that looked like it was lit by a dozen lights with only two light evaluations (it even gave a kind of cheap "ambient BRDF" GI), which was fine for our game.

 

as you want more advanced lighting, like Crysis, ... you can't go forward on current gen   ...   Crysis is forward shaded

?!

Shading complexity becomes screen-dependent. This benefit/disadvantage (depending on the application) is shared with Forward+. Assuming just one directional light is used, every pixel is shaded once. In a forward renderer, if you render everything back to front, every pixel covered by a triangle will be shaded multiple times. Hence the deferred shader's time is fixed and depends on screen resolution (hence lower screen resolution is an instant win for low-end users). A deferred shader/Forward+ cannot shade more than (num_lights * width * height) pixels even if there is an infinite number of triangles, whereas the forward renderer may shade the same pixel an infinite number of times for an infinite number of triangles, overwriting its previous value. Of course, if you're very good at sorting your triangles (chances are the game cannot be that good) the forward renderer may perform faster; but in a deferred shader you're on more stable ground.

This is a bit misleading, because it's standard practice with forward renderers to use a z-pre-pass, so that there isn't any overdraw.
 
Also, the G-buffer pass of deferred suffers the same issue, which may be a significant cost depending on your MRT set-up and your shaders (e.g. expensive parallax mapping done during the G-buffer pass), but again, you could solve this with a z-pre-pass, if required.
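A toy model makes the z-pre-pass argument concrete. This simulates depth testing for a single pixel covered by several triangles; the draw order and depth values are made up, and real GPU behaviour (hierarchical Z, early-Z) is ignored:

```python
# Toy model: count expensive pixel-shader executions for one pixel
# covered by several triangles, with and without a z-pre-pass.

def shade_naive(depths):
    """Depth-tested rendering, no pre-pass: every fragment that passes
    the depth test at submission time runs the full pixel shader."""
    shaded = 0
    z = float("inf")  # depth buffer value for this pixel
    for d in depths:  # fragment depths, in draw order
        if d < z:     # passes the depth test -> shades, writes depth
            z = d
            shaded += 1
    return shaded

def shade_with_zpp(depths):
    """Z-pre-pass: a cheap depth-only pass finds the nearest surface
    first, then the expensive pass runs only where depth is EQUAL."""
    nearest = min(depths)
    return sum(1 for d in depths if d == nearest)

# Worst case for naive rendering: back-to-front draw order, where
# every fragment passes the depth test and gets shaded.
back_to_front = [9.0, 7.0, 5.0, 3.0, 1.0]
```

With back-to-front order the naive loop shades five times; with the pre-pass the expensive shader runs exactly once per pixel, no matter the order.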
 
Screen-dependent shading complexity matters even more when you consider that pixel shaders are run on 2x2 quads of pixels.

In a deferred (screen-space) lighting pass, an entire model can be lit by drawing a quad (polygon) over the top of it, in which every pixel quad (2x2 pixels) is processed fully, regardless of the underlying geometry, so your quad efficiency is 100%.

On the other hand, if you do the lighting during forward rendering, then many of your model's triangles will only cover portions of pixel-quads (any edge that isn't aligned to the x/y axis will cut through many quads, partially covering them), which leads to a large amount of wasted shading and forces you to aggressively LOD your models so that you have large triangles. In the worst case, if your models are made up of pixel-sized triangles, then your quad efficiency is only 25%, which means your pixel shaders are effectively 4 times slower than they should be.
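The 25% worst case falls out of simple arithmetic. Here's a hypothetical helper (not any real profiler metric) that captures the idea:

```python
# Quad efficiency: the fraction of lanes in the 2x2 pixel quads that
# do useful work. GPUs shade whole quads (needed for derivatives), so
# partially covered quads waste lanes on "helper" pixels.

def quad_efficiency(covered_pixels, quads_touched):
    """covered_pixels: pixels the primitive actually covers.
    quads_touched: number of 2x2 quads those pixels fall into."""
    return covered_pixels / (4 * quads_touched)

# Full-screen deferred pass: every touched quad is fully covered.
full = quad_efficiency(covered_pixels=4, quads_touched=1)   # 1.0

# Worst case: pixel-sized triangles, one covered pixel per quad,
# so 3 of every 4 shader invocations are wasted.
tiny = quad_efficiency(covered_pixels=1, quads_touched=1)   # 0.25
```

Anything in between (triangle edges slicing through quads) lands somewhere in that 25-100% range, which is why aggressive LOD to keep triangles large pays off for forward lighting.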

