
[DX11] Battlefield 3 GBuffer Question


Old topic!
Guest, the last post of this topic is over 60 days old, and at this point you may not reply in this topic. If you wish to continue this conversation, start a new topic.

  • You cannot reply to this topic
6 replies to this topic

#1 360GAMZ   Members   -  Reputation: 133


Posted 06 December 2011 - 07:05 PM

I've been poring over the DICE presentations on Battlefield 3's rendering technology. I have a question about this presentation:

http://www.slideshare.net/fullscreen/DICEStudio/shiny-pc-graphics-in-battlefield-3/1

On slide 70, they start talking about the various GBuffers used in their deferred renderer:

- Diffuse (slide 70)
- Normals (slide 71)
- Specular (slide 72)
- Smoothness (slide 73)
- Sky visibility (slide 74)

The Diffuse, Normals, and Specular GBuffers are self-explanatory. But what do the Smoothness and Sky Visibility GBuffers contain, and how do you suppose they're used?


#2 Strychnine.213   Members   -  Reputation: 156


Posted 06 December 2011 - 07:18 PM

I've been poring over the DICE presentations on Battlefield 3's rendering technology. I have a question about this presentation:

http://www.slideshar...battlefield-3/1

On slide 70, they start talking about the various GBuffers used in their deferred renderer:

- Diffuse (slide 70)
- Normals (slide 71)
- Specular (slide 72)
- Smoothness (slide 73)
- Sky visibility (slide 74)

The Diffuse, Normals, and Specular GBuffers are self-explanatory. But what do the Smoothness and Sky Visibility GBuffers contain, and how do you suppose they're used?


I remember asking myself the same questions when I looked through the whitepapers. My guess is this: I think that the smoothness buffers are used for buffering tessellation in DirectX 11 and OpenGL. I am still not sure why you would need to buffer that, but they use a lot of it. I figured the sky buffering was for the jets. I noticed in-game that the jets can fly outside the bounds of the set points on the map. My guess is that the sky buffers render sky and landscape past the set distance for the jets. I might be wrong, however.

#3 Hodgman   Moderators   -  Reputation: 27837


Posted 06 December 2011 - 08:39 PM

In most lighting models, "Specularity" can't be expressed by a single number. Traditional models use a "specular mask" (intensity) and a "specular power" (glossiness).

Many of the more complex lighting models use two similar but different parameters -- "roughness" and "index of refraction" instead. IOR is a physically measurable property of real-world materials, and using some math, you can convert it into a "specular mask" value, which determines the ratio of reflected vs refracted photons. This alone only describes the 'type' of surface though (e.g. glass and stone will have different IOR values).
Alongside this, you'll also have a 'roughness' value, which is a measure of how bumpy the surface is at a microscopic scale. If you had ridiculously high resolution normal maps, you wouldn't need this value, but seeing as normal maps usually only describe bumpiness on a mm/cm/inch scale, the roughness parameter is used to measure bumpiness on a micrometer scale.

In simple terms, you can think of the "specular" value as being the same as a "spec mask" or "IOR", and you can think of the "smoothness" as being equivalent to "spec power" or "roughness".
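To make the IOR-to-spec-mask conversion concrete, here's a small sketch of the usual normal-incidence Fresnel reflectance formula (often called F0). The specific IOR values are common textbook figures, not anything from the DICE slides:

```python
def f0_from_ior(n: float) -> float:
    """Normal-incidence Fresnel reflectance for a dielectric with
    index of refraction n.  This is the 'ratio of reflected vs
    refracted photons' at normal incidence, usable as a specular
    mask / intensity value."""
    return ((n - 1.0) / (n + 1.0)) ** 2

# Common textbook IORs (assumed values, not from the BF3 slides):
print(f0_from_ior(1.5))   # glass  -> ~0.04
print(f0_from_ior(1.33))  # water  -> ~0.02
```

This is why so many dielectrics end up with a specular intensity of roughly 0.02–0.05 in physically based pipelines.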


As for the sky-visibility term, I assume it's an input for their ambient/indirect lighting equation.
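If that guess is right, the sky-visibility term would act like an occlusion mask on the sky's ambient contribution. A purely hypothetical sketch (none of these names come from DICE; it's just the shape of such a term):

```python
def ambient_sky_light(albedo, sky_color, sky_visibility):
    """Hypothetical use of a sky-visibility G-buffer value: it scales
    how much of the sky's indirect light reaches the surface, like an
    ambient-occlusion mask specialized for sky lighting."""
    return tuple(a * s * sky_visibility
                 for a, s in zip(albedo, sky_color))

# A fully occluded surface (visibility 0) receives no sky light:
print(ambient_sky_light((1.0, 0.5, 0.25), (0.4, 0.6, 1.0), 0.0))
```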

#4 jariRG   Members   -  Reputation: 633


Posted 07 December 2011 - 08:45 AM

There is a video for this presentation:
(specular and smoothness explanation, around 5:20)
(sky visibility, from 6:00)

#5 quaikohc   Members   -  Reputation: 122


Posted 21 December 2011 - 02:12 AM

In simple terms, you can think of the "specular" value as being the same as a "spec mask" or "IOR", and you can think of the "smoothness" as being equivalent to "spec power" or "roughness".


But isn't specular power usually stored in 8 bits? Why would they need a whole render target to store it?

Przemek

#6 Hodgman   Moderators   -  Reputation: 27837


Posted 21 December 2011 - 06:51 AM

But isn't specular power usually stored in 8 bits? Why would they need a whole render target to store it?

They don't use a whole render target...? They use one channel for one specular value, and another channel for the other specular value.
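For illustration, packing two 0–1 specular parameters into individual 8-bit channels of an RGBA8 target could look like this (a sketch of UNORM8 quantization in general; the actual BF3 channel layout isn't specified in this thread):

```python
def pack_unorm8(x: float) -> int:
    """Quantize a [0, 1] value into one 8-bit channel, the way a
    UNORM8 render-target channel stores it."""
    return round(max(0.0, min(1.0, x)) * 255)

def unpack_unorm8(b: int) -> float:
    """Recover the [0, 1] value from an 8-bit channel."""
    return b / 255.0

# Specular intensity and smoothness each occupy one channel of the
# same render target, not a whole target each:
spec_channel = pack_unorm8(0.04)
smooth_channel = pack_unorm8(0.8)
print(spec_channel, smooth_channel)
```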

#7 InvalidPointer   Members   -  Reputation: 1370


Posted 21 December 2011 - 10:42 AM

In most lighting models, "Specularity" can't be expressed by a single number. Traditional models use a "specular mask" (intensity) and a "specular power" (glossiness).

Many of the more complex lighting models use two similar but different parameters -- "roughness" and "index of refraction" instead. IOR is a physically measurable property of real-world materials, and using some math, you can convert it into a "specular mask" value, which determines the ratio of reflected vs refracted photons. This alone only describes the 'type' of surface though (e.g. glass and stone will have different IOR values).
Alongside this, you'll also have a 'roughness' value, which is a measure of how bumpy the surface is at a microscopic scale. If you had ridiculously high resolution normal maps, you wouldn't need this value, but seeing as normal maps usually only describe bumpiness on a mm/cm/inch scale, the roughness parameter is used to measure bumpiness on a micrometer scale.

In simple terms, you can think of the "specular" value as being the same as a "spec mask" or "IOR", and you can think of the "smoothness" as being equivalent to "spec power" or "roughness".


As for the sky-visibility term, I assume it's an input for their ambient/indirect lighting equation.


As an addendum-- the Blinn/Phong specular power bit is actually an approximation to evaluating a Gaussian distribution with a specific variance. As Hodgman points out, pretty much every specular model on the market today works on the idea of microfacet reflections-- that all surfaces are actually perfect, mirror-like reflectors. The catch is that when you look at them at a fine enough level of detail, the surfaces themselves are composed of really, really tiny facets that reflect light in (theoretically) all directions. The Gaussian term from before uses some probability to get around having to evaluate a bunch of reflection math, essentially estimating what percentage of the surface is actually oriented in such a way that it will reflect light towards the viewer (we can do this because of the Law of Large Numbers, which is slightly outside scope). This is actually what the half vector describes, if you think about it. Recall the Law of Reflection for a moment; it states that the angle of incidence is equal to the angle of exitance/reflectance. Therefore, reflecting the light vector around the half vector would yield the view vector.
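A quick numerical check of the "specular power approximates a Gaussian" point, using the common n = 2/m² − 2 mapping between a Blinn-Phong exponent and a Beckmann-style roughness (a standard approximation in the literature, not anything from the slides):

```python
import math

def blinn_phong_lobe(cos_h: float, n: float) -> float:
    """Unnormalized Blinn-Phong specular lobe: (N.H)^n."""
    return cos_h ** n

def gaussian_facet_lobe(cos_h: float, m: float) -> float:
    """Unnormalized Beckmann/Gaussian microfacet distribution:
    exp(-(tan(alpha)/m)^2), where alpha is the half-vector angle
    and m is the RMS slope of the microfacets."""
    alpha = math.acos(cos_h)
    return math.exp(-((math.tan(alpha) / m) ** 2))

m = 0.3                    # assumed microfacet roughness
n = 2.0 / (m * m) - 2.0    # equivalent Blinn-Phong exponent (~20.2)
cos_h = math.cos(math.radians(10.0))
# The two lobes track each other closely for small half-angles:
print(blinn_phong_lobe(cos_h, n), gaussian_facet_lobe(cos_h, m))
```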

This has been Physically-Based Shading 101 with InvalidPointer, thanks for playing! :)

EDIT: Some further clarifications.



