matt77hias

Forward and Deferred Buffers


My current setup for Forward and Deferred (Forward does not use a GBuffer):

  • GBuffer
  1. Base Color buffer (4 bytes/texel)
  2. Material buffer (4 bytes/texel)
  3. Normal buffer (4 bytes/texel)
  4. Depth buffer (4 bytes/texel)
  • Ping-pong HDR buffers (post-processing)
  1. HDR 1 (16 bytes/texel)
  2. HDR 2 (16 bytes/texel)
  • Swap Chain
  1. Back buffer (4 bytes/texel)
  2. Depth buffer (4 bytes/texel)

Currently none of these buffers is shared. I wonder if it would be beneficial to share some of them to reduce the number of buffers:

  • GBuffer depth buffer + Swap Chain depth buffer
  • GBuffer base color buffer + HDR 1
  • GBuffer material buffer + Swap Chain back buffer (or should one directly use an sRGB texture for the Swap Chain back buffer?)

 


1 hour ago, matt77hias said:

Currently none of these buffers is shared. I wonder if it would be beneficial to share some of them to reduce the number of buffers:

In some of the games I've worked on, we've used a render-target pool. When doing a pass that temporarily needs some buffers, you ask the pool for them (e.g. bloom might request one ping-pong pair at HDR bit-depth, one half-resolution 8-bit buffer, and two quarter-resolution 8-bit buffers), then when you're finished using them, you return them to the pool so that subsequent rendering passes can possibly reuse them.

In D3D12 and on consoles, you can actually alias multiple textures over the top of each other, e.g. maybe two 8-bit textures and one 16-bit texture in the same block of memory. You can then start the frame by using the two 8-bit textures, and when you're finished with them, mark them as invalid, mark the 16-bit texture as valid, and start using that 16-bit texture for ping-ponging, etc... This kind of thing is not possible on older APIs though.

1 hour ago, matt77hias said:
  • GBuffer base color buffer + HDR 1

This doubles the size of your colour data in the GBuffer. Often GBuffer passes are bottlenecked by memory bandwidth, so you want to keep the GBuffer size as low as possible. :(

1 hour ago, matt77hias said:

GBuffer material buffer + Swap Chain back buffer

In some APIs, reading from the swap chain is disallowed. Make sure that it's allowed in the APIs that you're using :)

1 hour ago, matt77hias said:

GBuffer depth buffer + Swap Chain depth buffer

You don't even need a swap-chain depth buffer at all. The monitor doesn't display depth ;)

13 minutes ago, Hodgman said:

This doubles the size of your colour data in the GBuffer. Often GBuffer passes are bottlenecked by memory bandwidth, so you want to keep the GBuffer size as low as possible.

The idea was to have some more precision when storing base color in linear space. But I am going to convert to gamma space instead.

On the other hand, one of the HDR buffers will be idle until lighting is finished, so I thought about using it earlier. But you're right, bandwidth is also important (I seem a bit biased toward memory consumption).

13 minutes ago, Hodgman said:

You don't even need a swap-chain depth buffer at all. The monitor doesn't display depth

:o It is actually an artifact from the time I started D3D programming. It is the depth buffer for forward rendering; I didn't really find a good class to store that one in :P . So I dropped it in my Renderer class, which only owns the adapter, output, device, device context, swap chain, back buffer RTV, DSV and display configuration, but doesn't really do any actual "rendering".


8 hours ago, matt77hias said:
  • Ping-pong HDR buffers (post-processing)
  1. HDR 1 (16 bytes/texel)
  2. HDR 2 (16 bytes/texel)

I hope you mean 16-bits/channel here rather than 16 bytes/texel! Using R32G32B32A32_FLOAT for the HDR buffers would be practically unheard of. R16G16B16A16_FLOAT is fine for almost everything you would want to do and R11G11B10_FLOAT is commonly used for even most AAA titles.

53 minutes ago, ajmiles said:

I hope you mean 16-bits/channel here rather than 16 bytes/texel! Using R32G32B32A32_FLOAT for the HDR buffers would be practically unheard of. R16G16B16A16_FLOAT is fine for almost everything you would want to do and R11G11B10_FLOAT is commonly used for even most AAA titles.

I actually meant 16 bytes/texel. You can use half floats instead, but I wonder how HDR that really is? In my naive understanding, full floats come at the cost of 4x the 4-bytes/texel textures you use elsewhere in a game? Or for full HD something like 2^25 B = 32 MB.
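The arithmetic above can be checked with a quick sketch (full HD assumed to mean 1920x1080):

```cpp
#include <cstdint>

// Bytes needed for one render target of the given dimensions and texel size.
constexpr uint64_t BufferBytes(uint64_t width, uint64_t height,
                               uint64_t bytesPerTexel) {
    return width * height * bytesPerTexel;
}

constexpr double ToMiB(uint64_t bytes) { return bytes / (1024.0 * 1024.0); }

// At 1920x1080:
//   R32G32B32A32_FLOAT (16 bytes/texel): ~31.6 MiB, roughly the 2^25 B = 32 MiB above
//   R16G16B16A16_FLOAT  (8 bytes/texel): ~15.8 MiB, half the memory and bandwidth
//   R11G11B10_FLOAT     (4 bytes/texel):  ~7.9 MiB, same footprint as an 8-bit LDR target
```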


Half floats were literally invented for HDR :)

As above, using full floats is unheard of as doubling your memory bandwidth really will impact performance a lot. 

Halfs go from 0 to around 60k (65504, to be exact), which is a pretty massive dynamic range. You can exceed the maximum value quite easily with a lot of bright lights, which results in 'inf' being stored in that pixel, so you do need to check for that / clamp at the maximum.

If you need more range, you can move the multiply by 'exposure' out of your tone mapping shader and into every forward/lighting shader instead. As above, this is how people manage to get away with using even 11_11_10_FLOAT.
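A minimal CPU-side stand-in for the shader logic described above (names hypothetical): apply the exposure multiplier at the end of each forward/lighting pass, and clamp so the half-float target never receives a value that would become infinity.

```cpp
#include <algorithm>

// Largest finite value a 16-bit half float can store (the "around 60k" above).
constexpr float kHalfMax = 65504.0f;

// Pre-expose the lit value before it is written to a half-float render target.
// With the exposure multiply done here, the tonemapper no longer needs it, and
// the stored values stay within half-float range for reasonable exposures.
inline float PreExpose(float radiance, float exposure) {
    return std::min(radiance * exposure, kHalfMax);
}
```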

 

18 hours ago, matt77hias said:

and all of these resources had a SRV, RTV and UAV attached?

They had different flags. Making a resource UAV compatible may affect the memory layout compared to just SRV compatible, so you should use the minimum set of flags during resource creation. Likewise, some had mips while others didn't, etc. 

When allocating from the pool you'd specify size, formats, usage, capability, etc. 

1 hour ago, matt77hias said:

How do you define "exposure"?

In real life, it's basically the amount of light that the sensor/film is exposed to in order to create the image.

  • Increasing the aperture (the size of the hole that lets in light) will increase the amount of light coming in, but will also increase the strength of depth of field effects / narrow the depth of focus.
  • Increasing the shutter time (the time that the hole is open) will increase the amount of light coming in, but will also increase the strength of motion blur effects.
  • Increasing the ISO value (the sensitivity of the sensor/film) will increase the amount of light that is captured, but will also increase the strength of film-grain / noise effects.

Many engines are now trying to model real cameras, so that people trained with real-world tools will be immediately comfortable in engine, and also so that in-game post-processing effects look more natural / more like cinema.

If you're not trying to emulate a real camera though, then "exposure" is just an arbitrary multiplier that you use to rescale the input data as the first step of a tonemapper. You can either pick an exposure value manually for each scene in your game, or use an auto-exposure technique where you look at the average brightness of the lighting buffer (or more commonly - the geometric mean of the luminance in the lighting buffer) and use that average value to pick a suitable exposure value where the final image won't be too bright or too dark.
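The auto-exposure idea above (geometric mean of luminance, i.e. the log-average) can be sketched like this; the key value, epsilon, and function names are assumptions for illustration:

```cpp
#include <array>
#include <cmath>
#include <vector>

// Rec. 709 luma weights for the luminance of a linear RGB pixel.
inline float Luminance(const std::array<float, 3>& rgb) {
    return 0.2126f * rgb[0] + 0.7152f * rgb[1] + 0.0722f * rgb[2];
}

// Geometric mean (log-average) of the luminance in a lighting buffer.
// The small epsilon avoids log(0) on completely black pixels.
inline float GeometricMeanLuminance(const std::vector<std::array<float, 3>>& pixels) {
    double logSum = 0.0;
    for (const auto& p : pixels) {
        logSum += std::log(1e-4 + static_cast<double>(Luminance(p)));
    }
    return static_cast<float>(std::exp(logSum / static_cast<double>(pixels.size())));
}

// Map the scene's average luminance to a chosen mid-grey "key" value:
// brighter scenes get a smaller multiplier, darker scenes a larger one.
inline float AutoExposure(float geoMeanLuminance, float key = 0.18f) {
    return key / geoMeanLuminance;
}
```

In a real engine this reduction would run on the GPU (e.g. by repeatedly downsampling a log-luminance buffer), often with temporal smoothing so the exposure adapts gradually.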

5 minutes ago, Hodgman said:

In real life, it's basically the amount of light that the sensor/film is exposed to in order to create the image.

  • Increasing the aperture (the size of the hole that lets in light) will increase the amount of light coming in, but will also increase the strength of depth of field effects / narrow the depth of focus.
  • Increasing the shutter time (the time that the hole is open) will increase the amount of light coming in, but will also increase the strength of motion blur effects.
  • Increasing the ISO value (the sensitivity of the sensor/film) will increase the amount of light that is captured, but will also increase the strength of film-grain / noise effects.

Many engines are now trying to model real cameras, so that people trained with real-world tools will be immediately comfortable in engine, and also so that in-game post-processing effects look more natural / more like cinema.

So radiance?

10 minutes ago, matt77hias said:

So radiance?

No. To deliberately simplify: let's say that radiance is what the sensor is measuring. Exposure would determine what the maximum measurement could possibly be.

The sensor isn't measuring radiance really though - Radiance is energy (emitted or received) per angle, per area, per time. Shutter speed increases time. Aperture increases angle. ISO increases efficiency (the percentage of energy actually captured and not lost as heat, reflected, etc). Exposure is the combination of these three -- so increasing exposure actually allows the same radiance input to act over a longer period of time / larger angle, which allows more energy to be delivered to the sensor.

You can have a very high exposure value (large shutter time, large aperture, large ISO value), but if you don't shine any light (radiance) through the lens, the sensor won't report any values ;)

Likewise if you shine a constant radiance value through the lens, but vary the exposure value, the sensor will report higher or lower energy values.

tl;dr - it's basically an arbitrary multiplier that you can use to make everything brighter or darker.

41 minutes ago, Hodgman said:

No. To deliberately simplify: let's say that radiance is what the sensor is measuring. Exposure would determine what the maximum measurement could possibly be.

Radiance is actual energy (emitted or received) per angle, per area, per time. Shutter speed increases time. Aperture increases angle. ISO increases the percentage of energy actually captured (not lost as heat, reflected, etc). Exposure is the combination of these three -- so increasing exposure actually allows the same radiance input to act over a longer period of time / larger angle, which allows more energy to be delivered to the sensor.

You can have a very high exposure value (large shutter time, large aperture, large ISO value), but if you don't shine any light (radiance) through the lens, the sensor won't report any values ;)

Ah so the pixel intensity:

$$I_{j} = \int_{\Lambda} \int_{T} \int_{\mathcal{M}} h_{j}(\lambda, t, \mathrm{path}) \, L(\lambda, t, \mathrm{path}) \, \mathrm{d}\mathrm{path} \, \mathrm{d}t \, \mathrm{d}\lambda$$

Edit: Damn, I was hoping for MathJax support on GameDev.net.

$$I_{j}$$ -> intensity of pixel j

$$\Lambda$$ -> wavelength domain

$$T$$ -> time domain (so basically the shutter interval)

$$\mathcal{M}$$ -> the scene manifold (which includes the camera sensor)

$$h_{j}$$ -> response function of pixel j (which includes the sensitivity)

$$\mathrm{path}$$ -> a light transport path connecting the camera sensor point through the camera aperture, via a possibly empty sequence of other scene points, to a point on a light source

$$L$$ -> radiance

 


