
# the best ssao ive seen

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

238 replies to this topic

### #201AgentSnoop  Members   -  Reputation: 110


Posted 27 March 2010 - 08:13 AM

So, I tried running this (ArKano22's SSAO) in OpenGL since I'm using Cg for my shaders, and I'm not getting the same result. Anyone know why this might be, or things I can check?

On the left is what I get in Direct3D9, and on the right is OpenGL

EDIT:

I checked the positions each version reconstructs, and OpenGL's doesn't look like Direct3D 9's, so I looked at the texture offset I'm using:
```hlsl
float GetRandom(float2 uv)
{
    return tex2D(randomTex, screenSize * uv / 32.0f).x;
}

float2 Rotate(float2 c, float ang)
{
    const float cosAng = cos(ang);
    const float sinAng = sin(ang);
    return float2(cosAng*c.x - sinAng*c.y, sinAng*c.x + cosAng*c.y);
}

float offset     = sampleRadius / max(40.0, p.z);
float2 texOffset = Rotate(vec[4], GetRandom(IN.uv) * 8.0) * offset;
```
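For what it's worth, Rotate is a standard 2D rotation, so the math itself can be ruled out quickly with a CPU-side port (a minimal Python sketch, not part of the shader):

```python
import math

def rotate(c, ang):
    """CPU port of the shader's Rotate(): rotate 2D point c by ang radians."""
    cos_ang, sin_ang = math.cos(ang), math.sin(ang)
    return (cos_ang * c[0] - sin_ang * c[1],
            sin_ang * c[0] + cos_ang * c[1])

# Rotating (1, 0) by 90 degrees should land on (0, 1).
x, y = rotate((1.0, 0.0), math.pi / 2.0)
```

If the CPU values match the Direct3D 9 output but not OpenGL's, the problem is more likely in the random-texture lookup feeding the angle than in the rotation itself.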

It's not the most recent version, but with the limited testing I did, it seemed to be slightly slower. Regardless, here are my results with Direct3D 9 (which works) and OpenGL (which doesn't)

I'm thinking this has something to do with it, but I'm not sure if this is the root of the problem. Any thoughts?

[Edited by - AgentSnoop on March 27, 2010 7:13:39 PM]

### #202DudeMiester  Members   -  Reputation: 156


Posted 28 March 2010 - 08:16 AM

Perhaps it's because in OpenGL z values will have the opposite sign (ex: p.z)?

### #203AgentSnoop  Members   -  Reputation: 110


Posted 28 March 2010 - 09:44 AM

Quote:
 Original post by DudeMiester: Perhaps it's because in OpenGL z values will have the opposite sign (ex: p.z)?

I tried messing with stuff like that and with the texture coordinates. I'm already setting OpenGL to use 1.0 - uv.y where Direct3D 9 uses just uv.y.

When I just output the random texture to the screen, they are more or less the same. It seems like the OpenGL one is doing bilinear filtering for some reason, even though it's set to nearest; the only way I can get nearest seems to be to set everything to nearest.

Anyway, it's at the point where I do Rotate that things really start being different. When I look at the Direct3D 9 screen, it's mainly red dots with some black and occasionally some green dots. When I look at the OpenGL version, it's mainly black with some red dots (maybe a few green dots too, but you can't really tell).

If I do max(Rotate(...), 0), I get more red dots in OpenGL (it still looks darker because of the bilinear filtering).

Any thoughts?

EDIT: It seems that if I just skip the rotate part, I don't get a messed-up screen in OpenGL. Also, compared to Direct3D 9 with the Rotate function, I don't notice much of a difference.

[Edited by - AgentSnoop on March 29, 2010 5:44:30 PM]

### #204DudeMiester  Members   -  Reputation: 156


Posted 31 March 2010 - 04:50 PM

I actually see very little difference between the two in terms of the visual patterns. It may just be an effect of the bilinear filtering.

### #205martinsh  Members   -  Reputation: 136


Posted 10 April 2010 - 04:20 AM

I didn't want to create another SSAO topic, so I'm posting to the most recent one.
This is a standard AO technique that uses only the depth texture, so nothing new here, but my contribution is the sample gathering: instead of a box blur I aligned the samples circularly, which makes the occlusion softer and more natural. I also used a high-passed luminance texture to discard AO in highlighted areas, since in real life AO is noticeable only in shadow.

results (animated gifs):

Quote:

```glsl
uniform sampler2D DepthTexture;
uniform sampler2D RenderedTexture;
uniform sampler2D LuminanceTexture;
uniform float RenderedTextureWidth;
uniform float RenderedTextureHeight;

#define PI 3.14159265

float width = RenderedTextureWidth;   // texture width
float height = RenderedTextureHeight; // texture height
float near = 1.0;    // Z-near
float far = 1000.0;  // Z-far
int samples = 3;     // samples on each ring (3-7)
int rings = 3;       // ring count (2-8)

vec2 texCoord = gl_TexCoord[0].st;

vec2 rand(in vec2 coord) // generating random noise
{
    float noiseX = (fract(sin(dot(coord, vec2(12.9898, 78.233))) * 43758.5453));
    float noiseY = (fract(sin(dot(coord, vec2(12.9898, 78.233) * 2.0)) * 43758.5453));
    return vec2(noiseX, noiseY) * 0.004;
}

float readDepth(in vec2 coord)
{
    return (2.0 * near) / (far + near - texture2D(DepthTexture, coord).x * (far - near));
}

float compareDepths(in float depth1, in float depth2)
{
    float aoCap = 1.0;
    float aoMultiplier = 100.0;
    float depthTolerance = 0.0000;
    float aorange = 60.0; // units in space the AO effect extends to (this gets divided by the camera far range)
    float diff = sqrt(clamp(1.0 - (depth1 - depth2) / (aorange / (far - near)), 0.0, 1.0));
    float ao = min(aoCap, max(0.0, depth1 - depth2 - depthTolerance) * aoMultiplier) * diff;
    return ao;
}

void main(void)
{
    float depth = readDepth(texCoord);
    float d;
    float aspect = width / height;
    vec2 noise = rand(texCoord);

    float w = (1.0 / width)  / clamp(depth, 0.05, 1.0) + (noise.x * (1.0 - noise.x));
    float h = (1.0 / height) / clamp(depth, 0.05, 1.0) + (noise.y * (1.0 - noise.y));

    float pw;
    float ph;
    float ao;
    float s;

    for (int i = -rings; i < rings; i += 1)
    {
        for (int j = -samples; j < samples; j += 1)
        {
            float step = PI * 2.0 / float(samples * i);
            pw = (cos(float(j) * step) * float(i));
            ph = (sin(float(j) * step) * float(i)) * aspect;
            d = readDepth(vec2(texCoord.s + pw * w, texCoord.t + ph * h));
            ao += compareDepths(depth, d);
            s += 1.0;
        }
    }

    ao /= s;
    ao = 1.0 - ao;

    vec3 color = texture2D(RenderedTexture, texCoord).rgb;
    vec3 luminance = texture2D(LuminanceTexture, texCoord).rgb;
    vec3 white = vec3(1.0, 1.0, 1.0);
    vec3 black = vec3(0.0, 0.0, 0.0);
    vec3 treshold = vec3(0.2, 0.2, 0.2);

    luminance = clamp(max(black, luminance - treshold) + max(black, luminance - treshold) + max(black, luminance - treshold), 0.0, 1.0);

    gl_FragColor = vec4(color * mix(vec3(ao, ao, ao), white, luminance), 1.0);
}
```
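For anyone porting this, the readDepth linearization is easy to sanity-check on the CPU. A minimal Python version of the same formula (near = 1, far = 1000, as in the shader):

```python
def read_depth(z_buf, near=1.0, far=1000.0):
    """Mirror of the shader's readDepth(): map a [0,1] depth-buffer
    value to a roughly linear depth in [2*near/(far+near), 1]."""
    return (2.0 * near) / (far + near - z_buf * (far - near))

at_near = read_depth(0.0)  # near plane: 2*1/1001, about 0.002
at_far = read_depth(1.0)   # far plane: 2*1/(1001 - 999) = 1.0
```

If your port's depth values don't span roughly this range, the depth texture is probably being read or bound differently.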

[Edited by - martinsh on April 10, 2010 5:20:21 PM]

### #206ArKano22  Members   -  Reputation: 646


Posted 10 April 2010 - 09:19 AM

@Martinsh

that is one of the best depth-based methods I've seen. The images with shadows + AO are impressive! Well done!

### #207martinsh  Members   -  Reputation: 136


Posted 11 April 2010 - 03:05 AM

Thank you, ArKano22. Yeah, I'm pretty amazed at how the sample gathering makes a noticeable visual difference.

### #208Prune  Members   -  Reputation: 223


Posted 12 April 2010 - 10:47 AM

Quote:
 Original post by martinsh: I also used high-passed luminance texture to discard AO on the highlighted areas

Surely you mean bright-passed... A high-pass is a frequency-domain operation ;)

### #209swiftcoder  Senior Moderators   -  Reputation: 17689


Posted 12 April 2010 - 11:18 AM

Quote:
Original post by Prune
Quote:
 I also used high-passed luminance texture to discard AO on the highlighted areas
Surely you mean bright-passed... A high-pass is a frequency-domain operation ;)
Last I checked, light had a frequency too - regardless, the term is in common use with regards to image filters.

Tristam MacDonald - Software Engineer @ Amazon - [swiftcoding] [GitHub]

### #210martinsh  Members   -  Reputation: 136


Posted 12 April 2010 - 02:07 PM

Heh, yeah, I also work with sound, so it seemed quite appropriate to use high-pass instead of bright-pass :). Anyway, the principle is the same.

### #211Prune  Members   -  Reputation: 223


Posted 12 April 2010 - 08:16 PM

Quote:
 Original post by swiftcoder: Last I checked, light had a frequency too

With that usage, the images would be blue-violet hued ;D

In all seriousness, I was just joking around of course

### #212ArKano22  Members   -  Reputation: 646


Posted 19 April 2010 - 02:58 AM

Another update to the algorithm. It feels wrong contributing to the length of an already big thread, but this could be useful to some.

RM Project

Two changes:
-2D sampling has been left behind. The sampling method is now interleaved in object space, like the original implementation by Crytek. This allows using only 8 samples per pixel while still obtaining very good quality, and it also makes it faster (Sponza at 130+ fps, while the old method ran at 100 fps).

-I've added a "self-occlusion" artist variable. It is possible to achieve the traditional grayish look with brightened edges, but without halo artifacts, and you can control the amount of self-occlusion added to the result. In the image below you can see the Hebe model with different self-occlusion values. A bit of self-occlusion usually adds contrast and makes the effect more noticeable.

This is achieved by initializing the occlusion with some positive value (the self-occlusion value) and modifying the occlusion function so that it not only darkens creases (adds occlusion) but also brightens edges (subtracts occlusion).

-Added cubemap lighting and a texture to achieve a more in-game look.
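In rough terms, the signed self-occlusion scheme described above might look like this (a hypothetical Python sketch of the idea only, not ArKano22's actual shader code):

```python
def occlusion(signed_contributions, self_occlusion=0.3):
    """Sketch of the darken/brighten scheme: start from an artist-chosen
    self-occlusion value, then let each sample add occlusion (creases,
    positive) or subtract it (open edges, negative), clamped to [0, 1]."""
    ao = self_occlusion + sum(signed_contributions)
    return max(0.0, min(1.0, ao))

crease = occlusion([0.2, 0.1])   # darkened: 0.3 + 0.3 = 0.6
edge = occlusion([-0.2, -0.1])   # brightened, clamped at 0.0
```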

[Edited by - ArKano22 on April 19, 2010 9:58:14 AM]

### #213REF_Cracker  Members   -  Reputation: 894


Posted 21 April 2010 - 12:12 PM

ArKano22:

First off thanks for sharing your method... Now

I'm looking over the code and have a problem with a few things... could you explain the reasoning?

First off I see that vec[8] is initialized to the corners of a cube with the dimensions of (2,2,2). And then you do some pseudo random sampling in those directions. The problem is you are sampling out in 2D so your +1 or -1 in Z of those values never matters. In fact you're only passing the vec2 value of this to your doAmbientOcclusion function anyway. Did I miss something here? Wouldn't you be better off sampling in an 8 way circle if this is the case?

Now for the random length, it seems you're sampling a texture with 4 values. That works out to always be one of (1.0, 0.72, 0.46, 0.23)... is that correct?

Thanks!

### #214ArKano22  Members   -  Reputation: 646


Posted 21 April 2010 - 12:26 PM

Quote:
 Original post by REF_Cracker:

 ArKano22: First off thanks for sharing your method... Now I'm looking over the code and have a problem with a few things... could you explain the reasoning?

 First off I see that vec[8] is initialized to the corners of a cube with the dimensions of (2,2,2). And then you do some pseudo random sampling in those directions. The problem is you are sampling out in 2D so your +1 or -1 in Z of those values never matters. In fact you're only passing the vec2 value of this to your doAmbientOcclusion function anyway. Did I miss something here? Wouldn't you be better off sampling in an 8 way circle if this is the case?

 Now for the random length it seems you're sampling a texture with 4 values. That works out to always be one of (1.0, 0.72, 0.46, 0.23)... is that correct?

Hello REF:

I'm using 3D samples, the corners of a 2x2x2 cube as you say. By multiplying each sample vector with the view transform, I rotate it along with the view, and then use it as a 2D point. The Z location does not matter anymore, but because of the view transform the x,y values are not the same, so the Z value of each sample is needed to calculate the new x,y values passed to the occlusion function.

This sampling method results in a completely view-independent sampling pattern: the pattern moves with the camera, and each pixel always samples the same zone no matter how you move around the scene. This is the original SSAO sampling method; 100% 2D sampling was developed after Crysis.
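To illustrate why the z component still matters after the view transform (a small Python sketch with a hypothetical view rotation, not the actual shader code): rotating a corner of the 2x2x2 sample cube mixes its z into the projected x,y.

```python
import math

def mat_vec3(m, v):
    """Multiply a row-major 3x3 matrix by a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# Hypothetical view rotation: a 90-degree yaw around the Y axis.
a = math.pi / 2.0
view_rot = [[math.cos(a),  0.0, math.sin(a)],
            [0.0,          1.0, 0.0        ],
            [-math.sin(a), 0.0, math.cos(a)]]

# One corner of the 2x2x2 sample cube.
vx, vy, vz = mat_vec3(view_rot, (1.0, 1.0, 1.0))
# After the yaw, the projected x comes entirely from the sample's
# original z, which is why dropping z before the transform breaks it.
```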

About the sampling lengths: yes, they are always those values. That is because I'm using interleaved sampling, so within a 2x2 pixel area each pixel samples a different length, and then the four lengths are averaged together with a blur filter. This is also the original method, except Crysis used 4x4 interleaved sampling instead of 2x2 (I think; not sure about this).
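The 2x2 interleaving can be sketched on the CPU like this (the exact pixel-to-length mapping below is an assumption for illustration):

```python
# The four radii mentioned above, one per pixel in each 2x2 block.
LENGTHS = [1.0, 0.72, 0.46, 0.23]

def sample_length(px, py):
    """Pick a radius from the pixel's position inside its 2x2 block
    (hypothetical mapping; any fixed assignment works)."""
    return LENGTHS[(py % 2) * 2 + (px % 2)]

# Every 2x2 block covers all four radii exactly once, so a 2x2 blur
# effectively averages all four sampling lengths at every pixel.
block = {sample_length(x, y) for x in range(2) for y in range(2)}
```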

So in fact this method is Crytek's with a different occlusion function that helps with halos and improves accuracy, nothing more.

EDIT: By the way, I forgot to scale the samples with the screen size:
doAmbientOcclusion(i.uv,coord, p, n );
should be
doAmbientOcclusion(i.uv,coord*g_inv_screen_size*1000.0, p, n );

I will correct that as soon as possible.

### #215Prune  Members   -  Reputation: 223


Posted 21 April 2010 - 12:40 PM

I can't really make out any quality difference between this and the earlier approach. Great result!

### #216ArKano22  Members   -  Reputation: 646


Posted 21 April 2010 - 12:51 PM

Quote:
 Original post by Prune: I can't really make out any quality difference between this and the earlier approach. Great result!

The quality is more or less the same, maybe with a bit more contrast from the darken/brighten scheme.

The difference is in speed: the sampling is less random, which increases cache coherency. The other approach used at least 16 samples, while this one can use 8 or 14 for the same quality.

### #217martinsh  Members   -  Reputation: 136


Posted 24 April 2010 - 01:00 AM

Awesome, thanks for sharing.
That is some clever blurring. I ended up using this line instead of the texture:

Quote:
```hlsl
float getRandom(in float2 uv)
{
    return ((frac(uv.x * (g_screen_size.x / 2.0)) * 0.25) +
            (frac(uv.y * (g_screen_size.y / 2.0)) * 0.75));
}
```

Looks the same, but you don't need an external random texture, and I gained like 2 fps :D

### #218ArKano22  Members   -  Reputation: 646


Posted 24 April 2010 - 11:43 AM

Quote:
Original post by martinsh
Awesome, thanks for sharing.
that is some clever blurring.
I ended up using this line instead of texture.

Quote:
```hlsl
float getRandom(in float2 uv)
{
    return ((frac(uv.x * (g_screen_size.x / 2.0)) * 0.25) +
            (frac(uv.y * (g_screen_size.y / 2.0)) * 0.75));
}
```

Looks the same, but you dont need external random texture and I gained like 2 fps :D

it should be
return ((frac(uv.x * (g_screen_size.x/2.0))*0.25)+(frac(uv.y*(g_screen_size.y/2.0))*0.5));

neat trick! :D
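A quick CPU port (Python, with hypothetical screen-size values) shows the corrected weights keep the output in [0, 0.75):

```python
import math

def frac(x):
    return x - math.floor(x)

def get_random(u, v, width=1280.0, height=720.0):
    """CPU port of the textureless noise, with the corrected 0.5 weight."""
    return frac(u * (width / 2.0)) * 0.25 + frac(v * (height / 2.0)) * 0.5

# Sweep a grid of UVs and check the range of the noise.
vals = [get_random(u / 97.0, v / 89.0) for u in range(97) for v in range(89)]
lo, hi = min(vals), max(vals)  # always within [0, 0.75)
```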

### #219krulspeld  Members   -  Reputation: 122


Posted 26 April 2010 - 09:13 AM

@martinsh

There's an error in the code you posted. With rings=3 and samples=3 it generates the following pattern, consisting of 36 samples; the samples on the inner two rings are all duplicated.

You probably want to do something like this:
```glsl
int ringsamples;
for (int i = 1; i <= rings; i += 1)
{
    ringsamples = i * samples;
    for (int j = 0; j < ringsamples; j += 1)
    {
        float step = PI * 2.0 / float(ringsamples);
        pw = (cos(float(j) * step) * float(i));
        ph = (sin(float(j) * step) * float(i)) * aspect;
        d = readDepth(vec2(texCoord.s + pw * w, texCoord.t + ph * h));
        ao += compareDepths(depth, d);
        s += 1.0;
    }
}
```

For rings=3 and samples=3 this results in only 18 samples.
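Counting loop iterations confirms the difference (a quick Python check that just mirrors the two loop structures):

```python
def fixed_sample_count(rings, samples):
    """Corrected loop above: ring i carries i * samples points, i = 1..rings."""
    return sum(i * samples for i in range(1, rings + 1))

def original_sample_count(rings, samples):
    """Original loop: i in [-rings, rings) and j in [-samples, samples),
    i.e. (2*rings) * (2*samples) iterations."""
    return (2 * rings) * (2 * samples)

fixed = fixed_sample_count(3, 3)        # 3 + 6 + 9 = 18
original = original_sample_count(3, 3)  # 6 * 6 = 36
```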

### #220REF_Cracker  Members   -  Reputation: 894


Posted 26 April 2010 - 10:59 PM

Results - Depth Buffer Only
Sampling 8 points per pixel.
