Deferred rendering shimmering

Started by Relfos
16 comments, last by macnihilist 13 years, 6 months ago
In my implementation of deferred rendering, there is an annoying shimmering when the camera moves. It only occurs in specific parts of the geometry: parts that are very thin when seen from far away, e.g. window borders on buildings.

The resolution of the FBOs is already very large (4096x1080), although increasing it further would not solve this.

I think it is mainly caused by the lack of anti-aliasing when rendering to FBOs?
What solutions are there for this problem?

I think this is due to your depth buffer precision, and I don't think it can be avoided unless you use some blur + edge detection shader (fake AA) in your deferred engine.
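For illustration, a rough sketch of that kind of pass (a full-screen quad over the final image; uSceneTex, uDepthTex and the edge threshold are made-up names/values, not from any particular engine):

// Full-screen post-process fragment shader: blur only where depth
// discontinuities indicate a geometric edge ("fake AA").
uniform sampler2D uSceneTex;   // the lit scene
uniform sampler2D uDepthTex;   // linear depth from the G-buffer
uniform vec2 uTexelSize;       // 1.0 / render target resolution
varying vec2 vTexCoord;

void main()
{
    // Edge detection: compare depth with the right and top neighbours.
    float d  = texture2D(uDepthTex, vTexCoord).r;
    float dx = texture2D(uDepthTex, vTexCoord + vec2(uTexelSize.x, 0.0)).r;
    float dy = texture2D(uDepthTex, vTexCoord + vec2(0.0, uTexelSize.y)).r;
    float edge = step(0.001, abs(d - dx) + abs(d - dy)); // threshold is scene-dependent

    // Simple cross blur (average of 5 samples), applied only on detected edges.
    vec4 centre  = texture2D(uSceneTex, vTexCoord);
    vec4 blurred = centre
                 + texture2D(uSceneTex, vTexCoord + vec2( uTexelSize.x, 0.0))
                 + texture2D(uSceneTex, vTexCoord + vec2(-uTexelSize.x, 0.0))
                 + texture2D(uSceneTex, vTexCoord + vec2(0.0,  uTexelSize.y))
                 + texture2D(uSceneTex, vTexCoord + vec2(0.0, -uTexelSize.y));
    blurred *= 0.2;

    gl_FragColor = mix(centre, blurred, edge);
}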
Quote:Original post by belfegor
I think this is due to your depth buffer precision, and I don't think it can be avoided unless you use some blur + edge detection shader (fake AA) in your deferred engine.


My position vector is stored in 16-bit float format, so it should not have a problem with precision, right?
Could you (or someone else) provide some pointers to fake AA in a deferred engine?
Quote:Original post by Relfos
My position vector is stored in 16-bit float format, so it should not have a problem with precision, right?
That may be exactly the problem. A half-precision float has only 11 bits of precision (10 bits stored).

Which means that if "far away" is 200 meters, the smallest representable step is about 200 m / 2^11 ≈ 10 cm, so everything smaller than that collapses to the same value. That is probably OK for some things, but may not be OK for "touchy" things like specular highlights.

Have you considered applying LOD to your specular term, i.e. scaling it down with distance? This is a common hack to avoid "funny sparkles" when rendering oceans and such.
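Something like this in the lighting shader, for example (a minimal sketch; the names and distances are just illustrative tuning values):

// Fade the specular term out with view distance so that sub-pixel
// highlights on distant geometry cannot flicker.
float dist     = length(uCameraPos - worldPos);
float specFade = 1.0 - smoothstep(50.0, 200.0, dist); // full specular up to 50 m, none past 200 m
vec3  specular = specFade * uSpecColor
               * pow(max(dot(normal, halfVec), 0.0), uShininess);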
I was thinking about this; well, I am actually not using the depth buffer or the position for calculating the lighting (I've removed all point lights and specular calculations for testing).

This still happens with just one directional light, but I can somewhat understand why: when thin objects are far away, their size on screen is one pixel or less, and this causes a 'fight'. Is implementing AA the only solution?
If so, any link with information about an implementation?
Quote:Original post by Relfos
Quote:Original post by belfegor
I think this is due to your depth buffer precision, and I don't think it can be avoided unless you use some blur + edge detection shader (fake AA) in your deferred engine.


My position vector is stored in 16-bit float format, so it should not have a problem with precision, right?
Could you (or someone else) provide some pointers to fake AA in a deferred engine?


Storing position in an fp16 texture is pretty awful, precision-wise. I did some tests a while ago on my blog, if you want to see for yourself.
Quote:
...any link with information about an implementation...


Try this or google.
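As for the precision issue itself, the usual fix is to not store position at all: store a single linear depth value and reconstruct the view-space position per pixel. Roughly like this (a sketch with illustrative names; it assumes the vertex shader of the full-screen pass outputs the view-space far-plane corner for each quad vertex):

// Fragment-side reconstruction of view-space position from linear depth.
// vFarPlanePos is the view-space position of this pixel on the far plane,
// interpolated from the frustum's four far corners in the vertex shader.
uniform sampler2D uDepthTex;  // linear depth stored as viewZ / farPlane, in [0,1]
varying vec3 vFarPlanePos;

vec3 reconstructViewPos(vec2 uv)
{
    float linearDepth = texture2D(uDepthTex, uv).r;
    // Similar triangles: scaling the far-plane point by viewZ / far
    // lands exactly on the original surface point.
    return vFarPlanePos * linearDepth;
}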
Try checking this link out:

http://www.mvps.org/directx/articles/linear_z/linearz.htm

This will explain how to make your depth work for you!

Wisdom is knowing when to shut up, so try it.
--Game Development http://nolimitsdesigns.com: Reliable UDP library, Threading library, Math Library, UI Library. Take a look, it's all free.
Quote:Original post by smasherprog
Try checking this link out:

http://www.mvps.org/directx/articles/linear_z/linearz.htm

This will explain how to make your depth work for you!


No, it won't. What he does in that article can totally break things like z-compression and early-z culling. It also fails completely for triangles that cross the near clip plane.

In my engine I use only fp16 RGBA render targets (like StarCraft II) and store the linear depth value in two channels (in the other two channels of the same texture I write the encoded normal). Using a single fp16 value for depth was not enough, but using two delivers an artifact-free representation.

Though I keep a standard 24-bit depth + 8-bit stencil buffer for fast z and stencil tests.
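For what it's worth, here is a sketch of one way to split a [0,1] linear depth across two fp16 channels (the 256 split factor is an illustrative choice, not necessarily the only sensible one):

// Encode: quantize the high bits into one channel, store the rescaled
// remainder in the other. Each part stays well within fp16 precision.
vec2 encodeDepth(float depth) // depth in [0,1]
{
    float coarse = floor(depth * 256.0) / 256.0;
    float fine   = (depth - coarse) * 256.0; // remainder rescaled to [0,1)
    return vec2(coarse, fine);
}

// Decode: recombine the two channels into a single high-precision value.
float decodeDepth(vec2 enc)
{
    return enc.x + enc.y / 256.0;
}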

This topic is closed to new replies.
