OpenGL Screen Space Ambient Occlusion problem


Hi! I want to develop an SSAO pass for my project, a 3D visualization of a building. I want to add it to give the scene more realism. The project is in jMonkeyEngine, which uses OpenGL. I have been following the other SSAO posts here but I'm stuck. My approach is based on the Iñigo Quilez implementation. I created a depth shader that writes the view-space z depth of the objects into a texture. I encoded the value into a vec4 to get 32 bits of precision instead of the 8 bits I would get using just one channel, and used this in a TextureRenderer to obtain the depth texture. These are the vertex and fragment shaders of the depth pass:
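The encoding trick can be sanity-checked outside the shader. Here is a small Python sketch (not jME code, just the same arithmetic as the pack/unpack GLSL functions in this thread) that round-trips a depth value through four channels:

```python
import math

def fract(x):
    return x - math.floor(x)

def pack_float_to_vec4(value):
    # mirrors packFloatToVec4i: spread the fractional bits across 4 channels
    bit_sh = (256.0 ** 3, 256.0 ** 2, 256.0, 1.0)
    bit_msk = (0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0)
    res = [fract(value * s) for s in bit_sh]
    swiz = (res[0], res[0], res[1], res[2])  # GLSL res.xxyz
    return [r - sw * m for r, sw, m in zip(res, swiz, bit_msk)]

def unpack_float_from_vec4(v):
    # mirrors unpackFloatFromVec4i: weighted sum recovers the value
    bit_sh = (1.0 / 256.0 ** 3, 1.0 / 256.0 ** 2, 1.0 / 256.0, 1.0)
    return sum(a * b for a, b in zip(v, bit_sh))

depth = 0.371
packed = pack_float_to_vec4(depth)
print(abs(unpack_float_from_vec4(packed) - depth))  # ~0
```

Algebraically the cross terms cancel and the round trip is exact; the real 8-bit quantization of the render target is what limits precision in the shader.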

varying float posz;

void main()
{
	vec4 viewPos = gl_ModelViewMatrix * gl_Vertex;
	posz = -viewPos.z;
	gl_TexCoord[0] = gl_MultiTexCoord0;

	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

varying float posz;

uniform float zfar;

vec4 packFloatToVec4i(const float value)
{
	const vec4 bitSh = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
	const vec4 bitMsk = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
	vec4 res = fract(value * bitSh);
	res -= res.xxyz * bitMsk;
	return res;
}

void main()
{
	float depth = posz / zfar;
	gl_FragColor = packFloatToVec4i(depth);
}

I applied the SSAO shader to a fullscreen quad. For the shader I need the position of the top-right vertex of the far plane; I use it to obtain the view direction to the vertex (and use that, interpolated, in the fragment shader). I think the problem is around here. I obtain the point by:
// JME code

float farY = (float)Math.tan(Math.PI / 3.0 / 2.0) * cam.getFrustumFar();
float farX = farY * 1.33333333f;
return new Vector3f(farX, farY,cam.getFrustumFar());

PI / 3.0 is the FOV (60 degrees), halved for the tangent. The example I'm relying on uses this FOV value (it's an XNA SSAO implementation: http://www.codeplex.com/XNACommunity/Wiki/View.aspx?title=SSAO&referringTitle=Ejemplos ).
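As a quick check, the corner computation can be reproduced outside the engine. This is just a Python sketch of the same math as the jME snippet above (the 60-degree FOV and 4:3 aspect ratio come from that snippet, nothing engine-specific):

```python
import math

def far_plane_corner(fov_y_rad, aspect, z_far):
    """Top-right corner of the far plane in view space (camera at the origin)."""
    far_y = math.tan(fov_y_rad / 2.0) * z_far  # half-height of the far plane
    far_x = far_y * aspect                     # half-width via the aspect ratio
    return (far_x, far_y, z_far)

corner = far_plane_corner(math.pi / 3.0, 4.0 / 3.0, 1000.0)
print(corner)
```

Note that the corner only matches the camera if both the FOV and the aspect ratio match what the camera actually uses; a hardcoded 1.33333333 breaks as soon as the window is resized.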

// Basically I obtain the view direction by:
uniform vec3 cornerFustrum;

varying vec3 direccioVisio;	// view direction

void main(void)
{
	gl_TexCoord[0] = gl_MultiTexCoord0;

	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

	vec3 corner = vec3(-cornerFustrum.x * gl_Position.x,
			cornerFustrum.y * gl_Position.y, cornerFustrum.z);

	direccioVisio = corner;
}

uniform sampler2D depthtex;

uniform sampler2D normaltex;  // texture with random normals
uniform float escala;         // occlusion scale
uniform float radi;           // sampling radius
//uniform float resx, resy;

varying vec3 direccioVisio;   // ViewDir

float unpackFloatFromVec4i(const vec4 value)
{
	const vec4 bitSh = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
	return dot(value, bitSh);
}

void main()
{
	// array with the 16 randomized sample points
	vec3 samples[16];

	samples[0]  = vec3( 0.355512,  -0.709318,  -0.102371 );
	samples[1]  = vec3( 0.534186,   0.71511,   -0.115167 );
	samples[2]  = vec3(-0.87866,    0.157139,  -0.115167 );
	samples[3]  = vec3( 0.140679,  -0.475516,  -0.0639818);
	samples[4]  = vec3(-0.0796121,  0.158842,  -0.677075 );
	samples[5]  = vec3(-0.0759516, -0.101676,  -0.483625 );
	samples[6]  = vec3( 0.12493,   -0.0223423, -0.483625 );
	samples[7]  = vec3(-0.0720074,  0.243395,  -0.967251 );
	samples[8]  = vec3(-0.207641,   0.414286,   0.187755 );
	samples[9]  = vec3(-0.277332,  -0.371262,   0.187755 );
	samples[10] = vec3( 0.63864,   -0.114214,   0.262857 );
	samples[11] = vec3(-0.184051,   0.622119,   0.262857 );
	samples[12] = vec3( 0.110007,  -0.219486,   0.435574 );
	samples[13] = vec3( 0.235085,   0.314707,   0.696918 );
	samples[14] = vec3(-0.290012,   0.0518654,  0.522688 );
	samples[15] = vec3( 0.0975089, -0.329594,   0.609803 );

	//vec3 dv = normalize(direccioVisio);
	// in the XNA example they normalize the viewDir; others say DON'T NORMALIZE!

	// depth of the current pixel
	float prof = unpackFloatFromVec4i(texture2D(depthtex, gl_TexCoord[0].st));

	// ViewPos = depth * ViewDirection: view-space position of the point being shaded
	vec3 posicio = prof * direccioVisio;

	// load a random normal
	vec3 normalrandom = texture2D(normaltex, gl_TexCoord[0].st * 200.0).rgb;

	float color = 0.0;

	for (int n = 0; n < 16; n++) {

		// reflect against the random normal to improve the result
		vec3 vector = reflect(samples[n].xyz, normalrandom) * radi;

		// point to sample; its coordinates still need converting
		vec4 sample = vec4(posicio + vector, 1.0);

		// sample point in clip space, to obtain the texture coords
		vec4 posEI = sample * gl_ProjectionMatrix;
		// are the sample's coords the point's coords / screen resolution?
		//vec2 TexCoordpunt = 0.5 * (posEI.xy); // compute the uv
		vec2 TexCoordpunt = posEI.xy;

		// depth of the sample
		float profpunt = unpackFloatFromVec4i(texture2D(depthtex, TexCoordpunt));

		if (profpunt > prof) {
			// stored depth is behind the sample point: nothing occludes it
			color += 1.0;
		} else {
			// occlusion formula
			float occlusio = escala * max(prof - profpunt, 0.0);
			color += 1.0 / (1.0 + occlusio * occlusio * 0.1);
		}
	}

	// divide the color by the number of samples
	vec4 final = vec4(color / 16.0, color / 16.0, color / 16.0, 1.0);

	gl_FragColor = final;
}
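The `ViewPos = depth * ViewDirection` step only works if `direccioVisio` really is the vector from the eye to the point on the far plane under that pixel, and `depth` is the linear view-space depth divided by zfar. A Python sketch of that relationship, using the same corner math as the jME snippet above (the point and planes are arbitrary illustration values):

```python
import math

z_far = 1000.0
far_y = math.tan(math.pi / 3.0 / 2.0) * z_far  # half-height of the far plane
far_x = far_y * (4.0 / 3.0)                    # half-width for a 4:3 aspect

# an arbitrary view-space point in front of the camera (camera looks down -z)
p = (120.0, -35.0, -420.0)

# the far-plane point hit by the view ray through p
t = z_far / -p[2]
ray_to_far = (p[0] * t, p[1] * t, -z_far)

# linear depth, as the depth pass stores it
depth = -p[2] / z_far

# depth * direction-to-far-plane recovers the view-space position
reconstructed = tuple(depth * c for c in ray_to_far)
print(reconstructed)  # (120.0, -35.0, -420.0) up to rounding
```

If the interpolated direction does not actually point at the far plane for that pixel (wrong sign, wrong corner, or interpolated in the wrong space), the reconstructed position is wrong for every off-center pixel, which would fit an "only works in one direction" symptom.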


I have 2 critical problems: (1) I think I'm computing the view direction wrong, and I get a lot of artifacts; (2) I think I'm using a bad method in the fragment shader when I try to obtain the texture coords of the sample point (to read its depth). I have other questions too, like: what is the difference between using view-space coordinates and clip-space coordinates to compute the depth? This is my SSAO result: http://img230.imageshack.us/my.php?image=64726988hh7.png The problem is that it only works in one direction; in the other directions I only get artifacts. I will upload a video to YouTube to explain myself better. Thanks to everyone.
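On the texture-coordinate question: the usual chain is clip position = projection matrix times the view-space point (matrix on the left; `sample * gl_ProjectionMatrix` in GLSL multiplies by the transpose instead), then a perspective divide by w, then a `* 0.5 + 0.5` remap from NDC to [0, 1]. A Python sketch with a gluPerspective-style matrix (the FOV, aspect, plane distances, and test point are just example values):

```python
import math

def perspective(fovy, aspect, znear, zfar):
    # gluPerspective-style projection matrix, stored as rows
    f = 1.0 / math.tan(fovy / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (zfar + znear) / (znear - zfar), 2.0 * zfar * znear / (znear - zfar)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def view_to_texcoord(proj, p):
    x, y, z = p
    # clip = M * (x, y, z, 1)
    clip = [sum(proj[r][c] * v for c, v in enumerate((x, y, z, 1.0))) for r in range(4)]
    ndc_x, ndc_y = clip[0] / clip[3], clip[1] / clip[3]  # perspective divide
    return (ndc_x * 0.5 + 0.5, ndc_y * 0.5 + 0.5)        # NDC [-1,1] -> uv [0,1]

proj = perspective(math.pi / 3.0, 4.0 / 3.0, 1.0, 1000.0)
uv = view_to_texcoord(proj, (3.0, 2.0, -50.0))
print(uv)
```

The divide by w and the remap are exactly the steps between `posEI` and `TexCoordpunt` that the shader above currently skips.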

It's a bit long, so I didn't read everything, but why are you calculating a view vector? SSAO, when you think about it, is a bit like percentage closer filtering. You just test the depths of a few pixels surrounding the one you're currently shading to get an approximate brightness value. The more samples that pass the test, the less occlusion there is at that point and the brighter the pixel. You shouldn't really need to do any fancy determination of sample position; just iterate over each pixel in the frame buffer. It's a view-dependent effect.
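The depth-comparison idea described above can be sketched in a few lines of Python over a plain 2D depth buffer (illustrative only; the neighbourhood radius and bias are arbitrary choices):

```python
def naive_ao(depth, x, y, radius=2, bias=0.01):
    """Fraction of neighbouring pixels at least as deep as (x, y); 1.0 = unoccluded."""
    h, w = len(depth), len(depth[0])
    passed = total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                total += 1
                # a neighbour no closer to the camera than us does not occlude
                if depth[ny][nx] >= depth[y][x] - bias:
                    passed += 1
    return passed / total if total else 1.0

# a flat far wall with one near blocker in the middle
buf = [[1.0] * 7 for _ in range(7)]
buf[3][3] = 0.2
print(naive_ao(buf, 3, 2))  # neighbourhood contains the blocker -> less than 1.0
print(naive_ao(buf, 0, 0))  # far corner, all neighbours equal depth -> 1.0
```

Screen-space sampling like this is cheap but, as the thread shows, it darkens based only on depth differences; the view-space sampling the original post attempts tries to make the occlusion test geometrically meaningful rather than purely screen-local.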
