360GAMZ

DX11 [DX11] Battlefield 3 GBuffer Question


Recommended Posts

I've been poring through the DICE presentations on Battlefield 3's rendering technology. I have a question about this presentation:

http://www.slideshare.net/fullscreen/DICEStudio/shiny-pc-graphics-in-battlefield-3/1

On slide 70, they start talking about the various GBuffers used in their deferred renderer:

- Diffuse (slide 70)
- Normals (slide 71)
- Specular (slide 72)
- Smoothness (slide 73)
- Sky visibility (slide 74)

The Diffuse, Normals, and Specular GBuffers are self-explanatory. But what do the Smoothness and Sky Visibility GBuffers contain, and how do you suppose they're used?




I remember asking myself the same questions when I looked through the whitepapers. My guess is this: I think the smoothness buffer is used to drive tessellation in DirectX 11 and OpenGL. I'm still not sure why you would need to buffer that, but they use a lot of tessellation. I figured the sky buffer was for the jets. I noticed in-game that the jets can fly outside the bounds of the set points on the map. My guess is that the sky buffer renders sky and landscape past the set distance for the jets. I might be wrong, however.

In most lighting models, "Specularity" can't be expressed by a single number. Traditional models use a "specular mask" (intensity) and a "specular power" (glossiness).

Many of the more complex lighting models use two similar but different parameters -- "roughness" and "index of refraction" instead. IOR is a physically measurable property of real-world materials, and using some math, you can convert it into a "specular mask" value, which determines the ratio of reflected vs refracted photons. This alone only describes the 'type' of surface though (e.g. glass and stone will have different IOR values).
Alongside this, you'll also have a 'roughness' value, which is a measure of how bumpy the surface is at a microscopic scale. If you had ridiculously high-resolution normal maps, you wouldn't need this value, but seeing as normal maps usually only describe bumpiness on a mm/cm/inch scale, the roughness parameter is used to measure bumpiness on a micrometer scale.

In simple terms, you can think of the "specular" value as being the same as a "spec mask" or "IOR", and you can think of the "smoothness" as being equivalent to "spec power" or "roughness".
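To make the two parameters concrete, here is a small Python sketch. BF3's actual conversions aren't public, so the IOR-to-reflectance formula below is the standard Fresnel normal-incidence relation, and the smoothness-to-power mapping is just one common exponential choice, not necessarily DICE's:

```python
import math

def ior_to_f0(ior):
    """Convert an index of refraction to F0, the reflectance at normal
    incidence, i.e. the fraction of photons reflected rather than refracted."""
    return ((ior - 1.0) / (ior + 1.0)) ** 2

def smoothness_to_spec_power(smoothness, max_power=2048.0):
    """Map a [0,1] smoothness value to a Blinn-Phong specular exponent.
    An exponential curve keeps the perceptual steps roughly even; the
    max_power of 2048 is an illustrative choice, not an engine constant."""
    return 2.0 ** (smoothness * math.log2(max_power))

# Glass (IOR ~1.5) reflects roughly 4% of light head-on:
f0_glass = ior_to_f0(1.5)  # ~0.04
```

So a single 8-bit "specular" channel can hold F0 (or a spec mask), while a second 8-bit "smoothness" channel drives the width of the highlight.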


As for the sky-visibility term, I assume it's an input for their ambient/indirect lighting equation.
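If that guess is right, the sky-visibility channel would simply attenuate the ambient term, much like a baked ambient-occlusion factor restricted to the sky dome. A toy sketch (all names here are illustrative, not DICE's shader code):

```python
def ambient_term(albedo, sky_visibility, sky_color):
    """Scale the ambient/indirect sky contribution by how much of the
    sky dome the surface point can actually see, per RGB channel."""
    return [a * c * sky_visibility for a, c in zip(albedo, sky_color)]
```

A point under a bridge with sky_visibility near 0 would then receive almost no sky light, while an open rooftop gets the full ambient contribution.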

There is a video of this presentation: the specular and smoothness explanation is around 5:20, and sky visibility starts at 6:00.


In simple terms, you can think of the "specular" value as being the same as a "spec mask" or "IOR", and you can think of the "smoothness" as being equivalent to "spec power" or "roughness".


But isn't specular power usually stored in 8 bits? Why did they need a whole render target to store this?

Przemek

But isn't specular power usually stored in 8 bits? Why did they need a whole render target to store this?
They don't use a whole render target...? They use one channel for one specular value, and another channel for the other specular value.
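In other words, both parameters are quantized to 8 bits each and share one packed render target. A minimal sketch of that packing, assuming two [0,1] parameters stored in, say, the R and G channels of an RGBA8 target:

```python
def pack_specular(intensity, smoothness):
    """Quantize two [0,1] specular parameters to two 8-bit channels."""
    to_byte = lambda x: int(round(max(0.0, min(1.0, x)) * 255.0))
    return to_byte(intensity), to_byte(smoothness)

def unpack_specular(r, g):
    """Recover the two parameters in the deferred lighting pass."""
    return r / 255.0, g / 255.0
```

The remaining channels of the target are then free for other material data.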


In most lighting models, "Specularity" can't be expressed by a single number. Traditional models use a "specular mask" (intensity) and a "specular power" (glossiness).

Many of the more complex lighting models use two similar but different parameters -- "roughness" and "index of refraction" instead. IOR is a physically measurable property of real-world materials, and using some math, you can convert it into a "specular mask" value, which determines the ratio of reflected vs refracted photons. This alone only describes the 'type' of surface though (e.g. glass and stone will have different IOR values).
Alongside this, you'll also have a 'roughness' value, which is a measure of how bumpy the surface is at a microscopic scale. If you had rediculously high resolution normal maps, you wouldn't need this value, but seeing as normal maps usually only describe bumpiness on a mm/cm/inch scale, the roughness parameter is used to measure bumpiness on a micometer scale.

In simple terms, you can think of the "specular" value as being the same as a "spec mask" or "IOR", and you can think of the "smoothness" as being equivalent to "spec power" or "roughness".


As for the sky-visibility term, I assume it's an input for their ambient/indirect lighting equation.


As an addendum: the Blinn-Phong specular power bit is actually an approximation to evaluating a Gaussian distribution with a specific variance. As Hodgman points out, pretty much every specular model on the market today works on the idea of microfacet reflections: all surfaces are actually perfect, mirror-like reflectors. The catch is that when you look at them at a fine enough level of detail, the surfaces themselves are composed of really, really tiny facets that reflect light in (theoretically) all directions.

The Gaussian term from before uses some probability to get around having to evaluate a pile of reflection math, essentially estimating what percentage of the surface is oriented in such a way that it will reflect light toward the viewer (we can do this because of the Law of Large Numbers, which is slightly out of scope here). This is actually what the half vector describes, if you think about it. Recall the Law of Reflection for a moment: it states that the angle of incidence equals the angle of reflectance. Therefore, reflecting the light vector about the half vector yields the view vector, so only facets whose normals match the half vector send light to the eye.
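That estimation can be sketched in a few lines. This is a toy Python version of the half vector and the normalized Blinn-Phong distribution, not anyone's production shader; the (p + 2)/2π normalization factor is the standard one for this lobe:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def half_vector(light_dir, view_dir):
    """The half vector bisects the light and view directions; only
    microfacets aligned with it reflect light toward the viewer."""
    return normalize([l + v for l, v in zip(light_dir, view_dir)])

def blinn_phong_ndf(normal, h, spec_power):
    """Estimate the fraction of microfacets whose mirror-normal matches
    the half vector; higher spec_power means a tighter highlight."""
    n_dot_h = max(0.0, sum(a * b for a, b in zip(normal, h)))
    # (p + 2) / 2pi normalizes the lobe so energy is conserved.
    return (spec_power + 2.0) / (2.0 * math.pi) * n_dot_h ** spec_power
```

With light and view both along the surface normal, the half vector equals the normal and the distribution peaks, which is exactly the head-on highlight you'd expect.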

This has been Physically-Based Shading 101 with InvalidPointer, thanks for playing! :)

EDIT: Some further clarifications.
