
cgWolf

Member

About cgWolf

  • Rank
    Newbie

Personal Information

  • Interests
    Art
    Design
    DevOps
    Programming
  1. What leaves me worried is the following excerpt from the presentation notes and the corresponding slides (43-45). I had a look at the presentations & PDF provided by Louis Bavoil in 2008, but was unable to find any details in that material that confirm or disprove the statement above. Still, my many failed attempts at finding a solution to this problem leave me wondering whether I'm trying to solve the impossible (using z-buffers as the only source of information about the scene), i.e. to figure out the green area(s) in the picture above.
  2. Forgot to mention one thing about the heuristic and why I think it would not work properly, even for a scenario only slightly modified from the one above. If I made the "thin pole" object in the scene above a very thin, wall-like object (just like the other one already in the scene), but with the same orientation and just barely touching the ground, it would still cast exactly the same occlusion even with the heuristic as I understand it from the paper. That's because the heuristic relies on the assumption that an object's depth is very similar to its screen-space width, which would not hold for the geometry described. So this is not really a viable solution for me. 😞
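     To make the discussion concrete, here is a toy sketch of how I read the paper's thickness heuristic: instead of keeping a plain running maximum of the horizon cosine, the horizon is allowed to decay toward lower samples, so the ground behind a thin occluder can partially win the horizon back. The decay rate implicitly models the occluder's depth as proportional to its screen-space extent, which is exactly the assumption that fails here. The function name, the `thickness_blend` constant, and this exact formulation are my own illustration, not code from the paper.

     ```python
     def update_horizon(horizon_cos, sample_cos, thickness_blend=0.5):
         """One horizon-search step with a GTAO-style thickness heuristic.

         Plain HBAO/GTAO would keep max(horizon_cos, sample_cos). The
         heuristic instead blends the horizon *down* toward lower samples,
         as if the previously found occluder had only finite thickness.
         Illustrative sketch only; parameters are assumptions.
         """
         if sample_cos < horizon_cos:
             # Sample is below the current horizon: decay toward it instead
             # of ignoring it, letting the march "see past" the occluder.
             return horizon_cos + thickness_blend * (sample_cos - horizon_cos)
         return sample_cos  # new, higher horizon
     ```

     For a thin wall seen edge-on the march still spends many steps on the wall's fragments before reaching the floor behind it, so the decayed horizon never drops far enough, which matches the behavior described above.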
  3. Yes, I tried to implement the described heuristic, but it didn't change the resulting occlusion much in the case above (maybe I did something wrong; I'll probably give it another try). Multiview AO looks really interesting. I had already wondered myself whether it would be possible to reuse depth information that is already present in the form of shadow maps to fill in some of the otherwise absent AO. I will definitely look through that paper. Thanks everyone.
  4. Hi, I am currently implementing the ambient occlusion technique described in the paper & presentation "Practical Realtime Strategies for Accurate Indirect Occlusion" (a.k.a. Ground-Truth Ambient Occlusion). The results so far are pretty good, but I have found one edge case where I think the technique breaks down, and I'm wondering whether there is a solution to the problem or whether there is no good way to solve it with depth buffers alone.

     The problem occurs with thin, elongated objects: they cast far too much occlusion, especially when viewed from a flat angle. In the images below I placed a thin pole at a ~20° angle sticking into the floor plane; for reference there is also a big cube touching the ground plane, as well as a very thin wall (of similar thickness to the pole). In the first two images you can see that the occlusion is too strong where the pole comes close to the ground plane when viewed from a flat angle. When viewed from above (the second pair of images), the occlusion looks more reasonable (much weaker).

     As far as I understand it, the problem is caused by the fact that the HBAO/GTAO algorithm searches for the horizon angle but has no knowledge of discontinuities in the depth buffer. Therefore, in the case below, the fragments of the floor that are below (in Y) AND behind (in Z) the fragments of the thin pole will find their steepest horizon angle on the first fragment that lies on the pole. The visible part of the hemisphere [V1] is then bounded only by the horizon angle found on the ground plane [h1] and the angle toward the front-most fragment of the pole [h2]. That means the total determined visibility (disocclusion) over the hemisphere is missing the entire visible back half [V2], spanning between the floor behind the pole [h4] and the back-tangent horizon onto the pole [h3]. At least that's what I think is the cause of the problem.

     Is my thinking correct so far? And if yes, are there ways to fix this issue within the GTAO/HBAO approach? I already tried out some ideas I came up with, using a back-face Z-buffer and full depth peeling with multiple layers, but nothing so far has solved the problem without introducing problems for the regular AO cases. Any hints would be much appreciated. Thanks.

     Schematic side view
     Side-view RGB
     Side-view AO (the thin pole casts far too much occlusion onto the ground plane)
     Top-view RGB
     Top-view AO (from the top perspective the problem is not as noticeable)
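     The failure mode above can be reproduced with a toy 1D horizon search over a single depth-buffer scanline: because the running horizon can only ever rise, a single thin sample dominates the result and the open region behind it ([V2] in the schematic) is never credited back. This is a deliberately simplified sketch of the idea, not the actual HBAO/GTAO shader; it omits falloff, multiple directions, and the visibility integral, and all names are mine.

     ```python
     import math

     def horizon_search_ao(depths, receiver_idx, step=1.0):
         """Toy 1D HBAO-style horizon search over one depth scanline.

         depths[i] is the view-space distance of the surface at pixel i
         (larger = farther). Marches away from the receiver, tracks the
         maximum elevation (sine of the horizon angle) and returns it as
         a crude occlusion estimate. Illustrative only.
         """
         zr = depths[receiver_idx]
         max_sin = 0.0
         for i in range(receiver_idx + 1, len(depths)):
             dz = zr - depths[i]               # how far the sample rises above
             if dz <= 0.0:                     # the receiver toward the camera
                 continue                      # sample does not raise the horizon
             dist = (i - receiver_idx) * step  # lateral distance to the sample
             s = dz / math.hypot(dist, dz)     # sine of the elevation angle
             # The horizon only ever rises: the search cannot "see past" a
             # thin occluder, so the open gap behind it is never recovered.
             max_sin = max(max_sin, s)
         return max_sin

     # Flat floor at depth 10, with a one-pixel-wide "pole" closer to the camera.
     scanline = [10.0] * 20
     scanline[5] = 7.0
     ao_with_pole = horizon_search_ao(scanline, 0)     # single thin sample dominates
     ao_floor_only = horizon_search_ao([10.0] * 20, 0) # flat floor: no occlusion
     ```

     Here `ao_with_pole` comes out the same whether the occluder at index 5 is one pixel or ten pixels deep, which is exactly the information a single front-face depth buffer cannot provide.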