The best SSAO I've seen

Started by
237 comments, last by Paksas 12 years, 5 months ago
Looks great ArKano22!

After inspecting the RM project I have a few questions.

I see that you've switched to using view-space positions when calculating the height difference instead of just depth. How do you feel this compares quality-wise to using just depth?

getPosition uses tex2Dlod but always picks from the top mip map. Is there a trick to this, or could one simply use no mip maps together with tex2D?

In the full-screen pass you pass the world -> projection matrix from the Hebe statue to transform the cube used for sampling (if I'm not mistaken, hehe). I don't quite see how this would be done when there are multiple objects being rendered. Also, couldn't the 14 vector3s used for the cube be pre-transformed, as they don't change in the pixel shader? I'm thinking it could be a major speedup to not have to do 14 matrix multiplications per pixel.
Quote:Original post by Ylisaren
Looks great ArKano22!

After inspecting the RM project I have a few questions.

I see that you've switched to using view-space positions when calculating the height difference instead of just depth. How do you feel this compares quality-wise to using just depth?

getPosition uses tex2Dlod but always picks from the top mip map. Is there a trick to this, or could one simply use no mip maps together with tex2D?

In the full-screen pass you pass the world -> projection matrix from the Hebe statue to transform the cube used for sampling (if I'm not mistaken, hehe). I don't quite see how this would be done when there are multiple objects being rendered. Also, couldn't the 14 vector3s used for the cube be pre-transformed, as they don't change in the pixel shader? I'm thinking it could be a major speedup to not have to do 14 matrix multiplications per pixel.


Thanks :).

For this method I always used positions instead of depth. This is because I do not use only the distance/difference between samples, but also the angular difference between the receiver's normal and the vector from the receiver to the occluder. To compute this vector, positions are needed. The quality is better, and the haloing almost disappears. With this method you can also control the amount of self-occlusion, from 0 to completely self-occluded. Worth the extra space used to store positions, in my opinion. But as always, it's a tradeoff between quality and speed.
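As a sketch of that angular term (hedged: the function and parameter names here, like getPosition, g_bias and g_intensity, are illustrative, not necessarily the exact shader from the demo), the per-sample occlusion might look like:

```glsl
// Illustrative sketch, not the exact demo shader: occlusion contributed
// by one sample, using view-space positions and the receiver's normal.
float doAmbientOcclusion(vec2 tcoord, vec2 offset, vec3 p, vec3 n)
{
    vec3 diff = getPosition(tcoord + offset) - p; // receiver -> occluder
    vec3 v = normalize(diff);
    float d = length(diff) * g_scale;
    // dot(n, v) is the angular term; g_bias controls self-occlusion
    // (0 = fully self-occluded, larger values suppress it).
    return max(0.0, dot(n, v) - g_bias) * (1.0 / (1.0 + d)) * g_intensity;
}
```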

About the matrix used for the cube, it is the view transform as you say. You could precompute all positions in the vertex shader, or even better on the CPU and then pass them to the shader, because it doesn't take perspective into account. Using this transform is "wrong", but it works well for usual FOVs. For extreme FOVs it is not as accurate, but it still looks good.
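For instance (hedged: assuming the 14 vectors are the 6 face and 8 corner directions of a cube, and the names here are illustrative), the directions could be rotated once per frame on the CPU and uploaded as a uniform array:

```glsl
// Illustrative sketch: transform the 14 cube directions once per frame
// on the CPU instead of per pixel.
// CPU side (pseudocode): viewDirs[i] = mat3(viewMatrix) * cubeDirs[i];
uniform vec3 viewDirs[14]; // pre-rotated sampling directions
// The pixel shader then uses viewDirs[i] directly -- no per-pixel
// matrix multiplication, since the rotation ignores perspective anyway.
```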

I'm quite unsure about which method is better: pure 2D sampling or view-space sampling. 2D usually requires more samples, but it is simpler. It's a matter of taste, I guess.
Quote:Original post by REF_Cracker
Results - Depth Buffer Only
Sampling 8 points per pixel.


That's quite impressive! If you could reduce the haloing... that's the only showstopper I see (around the arm, it's too dark).
Quote:Original post by ArKano22
Thanks :).

For this method I always used positions instead of depth. This is because I do not use only the distance/difference between samples, but also the angular difference between the receiver's normal and the vector from the receiver to the occluder. To compute this vector, positions are needed. The quality is better, and the haloing almost disappears. With this method you can also control the amount of self-occlusion, from 0 to completely self-occluded. Worth the extra space used to store positions, in my opinion. But as always, it's a tradeoff between quality and speed.

You could compute positions from depth and use that, then, too, right? (Just asking for verification.) So there's no real quality difference, just computation speed against storage space.
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

Quote:Original post by davepermen
Quote:Original post by ArKano22
Thanks :).

For this method I always used positions instead of depth. This is because I do not use only the distance/difference between samples, but also the angular difference between the receiver's normal and the vector from the receiver to the occluder. To compute this vector, positions are needed. The quality is better, and the haloing almost disappears. With this method you can also control the amount of self-occlusion, from 0 to completely self-occluded. Worth the extra space used to store positions, in my opinion. But as always, it's a tradeoff between quality and speed.

You could compute positions from depth and use that, then, too, right? (Just asking for verification.) So there's no real quality difference, just computation speed against storage space.


Yes, that's how I'm going to do it. The view-space reconstruction implementation I use depends on vector interpolation between the vertex and pixel shader, however, which can't be used when you're sampling arbitrary texels. As long as one has the 4 frustum corners (even less than that is required), one can reconstruct the far position very cheaply per sample, though.
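A sketch of that reconstruction for arbitrary texels (hedged: the uniform names and the linear-depth convention are assumptions, not the poster's actual code):

```glsl
// Illustrative sketch: rebuild a view-space position for any texel by
// interpolating the far-plane frustum corners from the texcoord.
uniform vec3 frustumCorners[4]; // view-space far-plane corners
uniform sampler2D depthTex;     // linear depth in [0,1]
vec3 getPosition(vec2 uv)
{
    vec3 top    = mix(frustumCorners[0], frustumCorners[1], uv.x);
    vec3 bottom = mix(frustumCorners[2], frustumCorners[3], uv.x);
    vec3 ray    = mix(top, bottom, uv.y);   // ray to the far plane
    return ray * texture2D(depthTex, uv).r; // scale by linear depth
}
```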
Can't wait to test it once I start programming a game again :) Thanks for the reply.
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

Krulspeld:
Yup, you are right, this is how it was intended to be:

for (int i = 0; i < rings; i += 1)
{
  for (int j = 0; j < samples*i; j += 1)
  {
    float step = PI*2.0 / (samples*float(i));
    pw = cos(float(j)*step)*float(i);
    ph = sin(float(j)*step)*float(i)*aspect;
    d = readDepth(vec2(texCoord.s + pw*w, texCoord.t + ph*h));
    ao += compareDepths(depth, d);
    s++;
  }
}

Also, I just found out that Arkano has already tried this or a similar method before :D

REF_Cracker:
Awesome result!
Awesome results, especially for the depth-only versions, REF_Cracker & martinsh!

I'll just post here to keep all SSAO stuff on topic!
I'm trying to implement a depth-only version, but there is... something off.
I'm using this as a source:
http://wiki.gamedev.net/index.php/D3DBook:Screen_Space_Ambient_Occlusion

As far as I know, everything is set up correctly. But my results (unblurred SSAO buffer) look more like this:


It makes a little bit of sense... in the right direction... But what I don't get is the "boxiness" of the sampling. Also, I don't get how relatively flat surfaces like the ground or the sides of the boxes have all that noise on them (also the skybox, which should be at depth = 1, since I cap/normalize depth at 100m).
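For reference, a common way to get the linear, capped depth that the depth-only variants in this thread sample (hedged: the near/far values and texture name here are assumptions):

```glsl
// Illustrative sketch: map hardware depth to linear [0,1], with the far
// plane at 100m so the skybox lands at depth = 1.
uniform sampler2D depthTex;
const float znear = 0.1;
const float zfar  = 100.0;
float readDepth(vec2 coord)
{
    float z = texture2D(depthTex, coord).r; // non-linear device depth
    return (2.0 * znear) / (zfar + znear - z * (zfar - znear));
}
```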

This looks very cool! Great job on the modelling
http://www.codersquare.net/ - CoderSquare.net general programming community
Quote:Original post by baradrasl
Awesome results, especially for the depth-only versions, REF_Cracker & martinsh!

I'll just post here to keep all SSAO stuff on topic!
I'm trying to implement a depth-only version, but there is... something off.
I'm using this as a source:
http://wiki.gamedev.net/index.php/D3DBook:Screen_Space_Ambient_Occlusion

As far as I know, everything is set up correctly. But my results (unblurred SSAO buffer) look more like this:


It makes a little bit of sense... in the right direction... But what I don't get is the "boxiness" of the sampling. Also, I don't get how relatively flat surfaces like the ground or the sides of the boxes have all that noise on them (also the skybox, which should be at depth = 1, since I cap/normalize depth at 100m).

I can comment on that article [grin]. Have you tried out the demo program that comes with the chapter? That implementation was very sensitive to how the parameters are adjusted; there was quite a bit of playing around needed to get it working correctly. I don't recall having so much noise on flat surfaces, though... did you use the shader directly, or have you made some changes to it?

If you still have trouble getting it to look correct, you could always try out one of the other methods that have been posted here; if it is a depth-only technique, it should be nearly interchangeable.

This topic is closed to new replies.
