racarate

PBR Sanity Check (Black Metal)


I was writing a simple albedo-metallic-roughness demo this morning, and I noticed that when using a single point light, a pure metal doesn't show up at all except for its specular glints.

 

I realize this is part of the currently popular theory that metals have zero diffuse response... but it seems nutty to me that you get completely black pixels wherever the metal surface doesn't line up with NdotH (or VdotR, depending on your specular formula).

 

Am I only noticing this because I am using a single point light?  Is this effect not a problem because most games use multiple lights?  Or because they use ambient cubes?  Or are materials just never set to pure metal?
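
For context, the setup boils down to something like this. A minimal sketch, assuming the common metallic-workflow convention where diffuse is scaled by (1 - metallic) and F0 is lerped from 0.04 to the albedo (illustrative values, not any specific engine's code):

```cpp
#include <cstdio>

// Minimal sketch of the common metallic-workflow split. The 0.04 dielectric
// F0 and the (1 - metallic) diffuse scale are the usual conventions.
struct Vec3 { float x, y, z; };

static Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

int main() {
    Vec3  albedo   = { 1.0f, 0.86f, 0.57f }; // gold-ish base color (illustrative)
    float metallic = 1.0f;                   // pure metal

    // Diffuse is scaled by (1 - metallic): exactly zero for a pure metal.
    Vec3 diffuse = { albedo.x * (1.0f - metallic),
                     albedo.y * (1.0f - metallic),
                     albedo.z * (1.0f - metallic) };

    // Specular F0: ~0.04 for dielectrics, the base color for metals.
    Vec3 f0 = lerp({ 0.04f, 0.04f, 0.04f }, albedo, metallic);

    std::printf("diffuse = (%g, %g, %g)\n", diffuse.x, diffuse.y, diffuse.z);
    std::printf("F0      = (%g, %g, %g)\n", f0.x, f0.y, f0.z);
    // With one point light, pixels outside the specular lobe only receive
    // the (zero) diffuse term -- hence the black metal.
}
```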



A roughness of zero is hard to achieve in reality, and if you take microfacets into account you will get some bright glints across the surface.

But once you add HDR rendering and tone mapping, then depending on the brightness of the light it can end up looking the same way again: pitch black with a bright point.
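
To illustrate, here's a tiny sketch using the Reinhard operator (just one common tone mapper, picked for illustration): a very bright highlight still maps near white, while a dim diffuse value gets crushed toward black by the 8-bit quantization.

```cpp
#include <cmath>
#include <cstdio>

// Reinhard tone mapping x / (1 + x), followed by gamma and 8-bit quantization.
// One common operator, used here purely for illustration.
static int toneMapTo8Bit(float hdr) {
    float ldr  = hdr / (1.0f + hdr);          // compress [0, inf) into [0, 1)
    float srgb = std::pow(ldr, 1.0f / 2.2f);  // rough gamma encode
    return static_cast<int>(srgb * 255.0f + 0.5f);
}

int main() {
    std::printf("specular glint (hdr=500):   %d\n", toneMapTo8Bit(500.0f));  // ~255
    std::printf("dim surround   (hdr=0.002): %d\n", toneMapTo8Bit(0.002f));  // ~15
    std::printf("black metal    (hdr=0):     %d\n", toneMapTo8Bit(0.0f));    // 0
}
```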

That's the point where the game/level/graphics designer steps in and wants to define this stuff themselves, because they want something cool-looking rather than physically correct :huh: (learned this lesson twice).

 

In space you also have radiation and stray photons flying around, which add a little noise to the image.

Most space-graded image sensors are 12-bit RGGB (Bayer pattern), which means half of the noise is green, one quarter red, and one quarter blue; the rest are gray 12-bit sensors (1/3 green, 1/3 red, 1/3 blue).
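
If you haven't seen the layout, here is a minimal sketch of an RGGB mosaic (the pattern's starting offset is an assumption; real sensors can begin on a different row/column):

```cpp
#include <cstdio>

// Which color a sensor pixel at (x, y) samples in an RGGB Bayer mosaic.
// 2x2 tile:  R G    Half the sites are green, a quarter red, a quarter
//            G B    blue -- hence the 1/2, 1/4, 1/4 noise split above.
enum class Channel { R, G, B };

static Channel bayerRGGB(int x, int y) {
    bool evenRow = (y % 2) == 0;
    bool evenCol = (x % 2) == 0;
    if (evenRow) return evenCol ? Channel::R : Channel::G;
    else         return evenCol ? Channel::G : Channel::B;
}

int main() {
    const char* names[] = { "R", "G", "B" };
    for (int y = 0; y < 4; ++y) {
        for (int x = 0; x < 4; ++x)
            std::printf("%s ", names[static_cast<int>(bayerRGGB(x, y))]);
        std::printf("\n");
    }
}
```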

The contrast in space is huge because of the lack of a filter (the atmosphere) which would otherwise scatter the light and shift its frequency.

Cameras in space compensate for this by taking several images with different shutter times and post-processing one of them with the information from the others, and/or by using filter wheels. It depends on what you want to achieve, because in a single exposure you can get either the stars or the planet surface, but not both.
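
The bracketing idea in a minimal sketch (simplified to a linear sensor response; real pipelines calibrate the response curve): each exposure is divided by its shutter time to estimate radiance, and well-exposed samples are weighted highest when merging.

```cpp
#include <cmath>
#include <cstdio>

// Merge bracketed exposures into one radiance estimate (linear-sensor sketch).
// Each sample contributes pixel_value / shutter_time; the weights favor
// mid-range pixels that are neither crushed to black nor clipped white.
static float mergeExposures(const float* pixels, const float* shutterSec, int n) {
    float sum = 0.0f, wsum = 0.0f;
    for (int i = 0; i < n; ++i) {
        float v = pixels[i];                         // normalized 0..1
        float w = 1.0f - std::fabs(2.0f * v - 1.0f); // hat weight, peak at 0.5
        sum  += w * (v / shutterSec[i]);
        wsum += w;
    }
    return wsum > 0.0f ? sum / wsum : 0.0f;
}

int main() {
    // Same scene point captured with three shutter times (seconds).
    float pixels[]  = { 0.02f, 0.45f, 0.98f };
    float shutter[] = { 0.001f, 0.02f, 0.5f };
    std::printf("radiance estimate: %g\n", mergeExposures(pixels, shutter, 3));
}
```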

If you saw real footage of space, the moon's surface, or other planets, you would think it was artificial.

Interesting. Are there any places where we can find more information about this? Thanks in advance.


https://en.wikipedia.org/wiki/Bayer_filter

Here's a random 4K image sensor:

http://www.sony-semicon.co.jp/products_en/IS/sensor2/img/products/IMX377CQT_ProductSummary_v1.5_20150414.pdf

It uses an RGGB pattern and gets up to 12 bits per channel.

Depending on the hardware of such a chip, the bits per channel it stores and the bits it actually senses can differ.

E.g. a cheap image sensor can store 10-bit red, 12-bit green, and 10-bit blue in a 32-bit pixel in the memory buffer, but it only senses 8, 10, and 8 bits and scales them up into the intermediate format.
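
That upscaling can be done like this (bit replication is one common scheme that maps full scale to full scale; actual sensor firmware may do something else):

```cpp
#include <cstdint>
#include <cstdio>

// Stretch an n-bit sample to m bits (m >= n, m <= 2n) by bit replication,
// so all-zeros maps to all-zeros and all-ones maps to all-ones.
static uint16_t widen(uint16_t v, int fromBits, int toBits) {
    uint16_t out = v << (toBits - fromBits);  // shift into the high bits
    out |= v >> (2 * fromBits - toBits);      // refill low bits from the top
    return out;
}

int main() {
    // 8-bit sensed red scaled into the 10-bit stored format:
    std::printf("0x00  -> 0x%03X\n", widen(0x00, 8, 10));   // 0x000
    std::printf("0xFF  -> 0x%03X\n", widen(0xFF, 8, 10));   // 0x3FF (full scale)
    // 10-bit sensed green into the 12-bit stored format:
    std::printf("0x3FF -> 0x%03X\n", widen(0x3FF, 10, 12)); // 0xFFF
}
```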

Also important: the raw image is, as the name says, raw. It's larger than the final image because it includes the pixels from the border of the sensor, which are cut out in post-processing.

You can see this in the linked specs. These border pixels are partially black or don't provide all three colors, because they are covered by the case material and/or receive fewer photons. It looks a little like a vignetting effect, but the one you see in movies comes from panels at the end of the lens.

https://en.wikipedia.org/wiki/Vignetting

The colors are also not the way you expect them to be; they have to be mapped to a color space using a calibrated function for each channel.

Each sensor has a different one, and the three sensors I worked with provided them via firmware on the sensor PCB.

After a lot of post-processing, you receive a part of the image from the sensor in a specific color space, like YUV with 8 bits per channel.

You get better results if the range per color channel is higher, because you lose less in the conversion into another color space.
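
The last conversion step looks roughly like this (a sketch using the full-range BT.601 matrix; the real pipeline's matrix and ranges depend on the target format):

```cpp
#include <algorithm>
#include <cstdio>

// Full-range BT.601 RGB -> YUV, quantized to 8 bits per channel.
// One common variant; pipelines often use limited range or BT.709 instead.
struct YUV8 { int y, u, v; };

static YUV8 rgbToYuv8(float r, float g, float b) {
    float y =  0.299f * r + 0.587f * g + 0.114f * b;
    float u = -0.169f * r - 0.331f * g + 0.500f * b + 0.5f; // chroma centered at 0.5
    float v =  0.500f * r - 0.419f * g - 0.081f * b + 0.5f;
    auto q = [](float c) {
        return std::clamp(static_cast<int>(c * 255.0f + 0.5f), 0, 255);
    };
    return { q(y), q(u), q(v) };
}

int main() {
    YUV8 gray = rgbToYuv8(0.5f, 0.5f, 0.5f); // chroma should sit at ~128
    std::printf("Y=%d U=%d V=%d\n", gray.y, gray.u, gray.v);
}
```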

 

In our rover we use RGGB and gray image sensors with 2k×2k resolution. The gigapixel panorama is done with the gray image sensor and a filter wheel.

The filter wheel consists of different filters, like red, blue, green, and infrared.

The camera I'm using on another project is this one, and here are the specs.

 

As you can see in the image, it's part of a raw grab of a single frame without correction or filtering, just color space conversion to 8-bit RGB.

You can see the noise in the dark areas where photons hit the sensor while it gathers input.

There is also color shifting all over the place, which can be fixed with additional frames or longer shutter times (not a good idea).
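
The additional-frames fix is just temporal averaging; a minimal sketch (assuming a static scene and independent zero-mean noise, so the noise sigma drops by roughly sqrt(N)):

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Average N noisy captures of the same (static) pixel. With independent
// zero-mean noise, the residual noise shrinks by about a factor of sqrt(N).
int main() {
    std::mt19937 rng(42);
    std::normal_distribution<float> noise(0.0f, 0.05f); // per-frame noise sigma
    const float truePixel = 0.2f;
    const int frames = 16;

    float sum = 0.0f;
    for (int i = 0; i < frames; ++i)
        sum += truePixel + noise(rng);

    std::printf("single-frame sigma ~0.05, averaged ~%g, estimate %g\n",
                0.05 / std::sqrt(static_cast<double>(frames)), sum / frames);
}
```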

 

Physically based rendering is a pretty complex topic, and there are different solutions with different abstractions.

Currently we have BRDFs in the game industry, and we're starting to see the first steps beyond that.

The current problem is that we take point measurements of real materials and use them as the full material, which is simply wrong.

That's the biggest reason even BSSRDF renderings look artificial: you need material imperfection, which means many more measurement points of the same material in different places, with a lookup map generated from them. This is currently done by hand by texture artists until they think it looks right or no time is left for further polish.

This is also the reason most games I've seen use splatmaps to combine multiple materials and then adjust the roughness, metallic, and other textures by hand afterwards.

The developers of The Order: 1886 built a material scanner (see the last pages) and bought a lot of samples, then took pictures under different light setups to obtain more accurate information about the materials, already including variation and imperfection.

With this library and splatmaps, they combine multiple materials.
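
The blending itself is simple; a minimal sketch of a four-layer splatmap lookup (the layer count and parameter packing are assumptions, engines differ):

```cpp
#include <cstdio>

// Blend per-layer material parameters by the splatmap weights at one texel.
// Hypothetical 4-layer setup; real engines vary in layer count and packing.
struct MaterialParams { float roughness, metallic; };

static MaterialParams blendSplat(const MaterialParams layers[4], const float w[4]) {
    MaterialParams out = { 0.0f, 0.0f };
    float total = 0.0f;
    for (int i = 0; i < 4; ++i) total += w[i];
    for (int i = 0; i < 4; ++i) {
        float wi = w[i] / total;  // renormalize the weights
        out.roughness += wi * layers[i].roughness;
        out.metallic  += wi * layers[i].metallic;
    }
    return out;
}

int main() {
    MaterialParams layers[4] = {
        { 0.9f, 0.0f },  // dirt
        { 0.6f, 0.0f },  // rock
        { 0.3f, 1.0f },  // worn metal
        { 0.1f, 1.0f },  // polished metal
    };
    float weights[4] = { 0.1f, 0.2f, 0.5f, 0.2f }; // from the splatmap texture
    MaterialParams m = blendSplat(layers, weights);
    std::printf("roughness=%g metallic=%g\n", m.roughness, m.metallic);
}
```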

 

Here you can find an awesome description and a live demo of microfacets, also called roughness in some BRDF renderers.
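
For the original question, the relevant piece is the normal distribution function; a sketch of the widely used GGX/Trowbridge-Reitz D term (isotropic, with the common alpha = roughness^2 remapping):

```cpp
#include <cstdio>

// GGX / Trowbridge-Reitz normal distribution function (isotropic), with the
// common alpha = roughness^2 remapping. As roughness -> 0 the lobe collapses
// to a spike; for a pure metal there is no diffuse term to fill in the rest.
static float ggxD(float NdotH, float roughness) {
    const float kPi = 3.14159265f;
    float a  = roughness * roughness;
    float a2 = a * a;
    float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (kPi * d * d);
}

int main() {
    float roughnessValues[] = { 0.05f, 0.3f, 0.8f };
    for (float r : roughnessValues)
        std::printf("roughness %.2f: D(NdotH=1.0)=%g  D(NdotH=0.9)=%g\n",
                    r, ggxD(1.0f, r), ggxD(0.9f, r));
}
```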

This is only an abstraction, because microfacets can be really irregular, like snow, or can even reflect different colors at different angles, like some metallic coats on cars.

This can be achieved with tiny metal flakes of different colors in the transparent paint, which align randomly or are aligned, by the power of science, into a specific formation.

It's also possible to layer three colors and cut them at a specific angle with a laser, so you see one color from one angle, another from a different angle, and otherwise the top color of the stack.

The last technique is used in industry to build filters that allow a camera to see through glass, while light from another direction is reflected by a specific amount.
