Ah, of course. And that's why the test with the fixed vector wasn't correct. But the "error" I was talking about also happens when I render the normal way (with 16 random rays), and the normals are correctly transformed to view space then. I understand it doesn't really matter which direction the rays point: as long as I have multiple rays in all directions, ~50% of them should pass the hemisphere test. The other 50% are discarded, or, as you do, flipped to the right side. In my case, however, that percentage depends on the camera rotation.
So far I have this shader:
void main( float2 iTex     : TEXCOORD0,
           float4 iViewDir : TEXCOORD1,
           float4 iPos     : TEXCOORD2,
       out float4 oColor   : COLOR0,
           uniform sampler2D sceneDepth,
           uniform sampler2D noiseNormalMap,
           uniform half3     camPos     = V3_CAM_POS,
           uniform float4x4  projMatrix = MPROJ )
{
    // Sample sphere:
    // x points to cover a sphere shape that surrounds the pixel center coordinate.
    // NOTE: entries 9-16 duplicate 1-8, and every entry has x == z,
    // so only 8 unique directions (all in the x == z plane) are sampled.
    const half  sampleCount = 16;
    const half3 sampleSpherePos[16] = {
        float3( 0.527837, -0.085868,  0.527837),
        float3(-0.040088,  0.536087, -0.040088),
        float3(-0.670445, -0.179949, -0.670445),
        float3(-0.419418, -0.616039, -0.419418),
        float3( 0.440453, -0.639399,  0.440453),
        float3(-0.757088,  0.349334, -0.757088),
        float3( 0.574619,  0.685879,  0.574619),
        float3( 0.038510, -0.939059,  0.038510),
        float3( 0.527837, -0.085868,  0.527837),
        float3(-0.040088,  0.536087, -0.040088),
        float3(-0.670445, -0.179949, -0.670445),
        float3(-0.419418, -0.616039, -0.419418),
        float3( 0.440453, -0.639399,  0.440453),
        float3(-0.757088,  0.349334, -0.757088),
        float3( 0.574619,  0.685879,  0.574619),
        float3( 0.038510, -0.939059,  0.038510) };

    // Get normal and depth
    float4 pixel  = f4tex2D( sceneDepth, iTex.xy );
    float  pDepth = pixel.r;   // depth as z/w
    float3 pNrm   = pixel.gba; // eye-space normal

    // Reconstruct the eye/view-space position
    float3 ePos = pDepth * iViewDir.xyz;

    // Get a normal/noise for the dithering later on
    half3 noise = h3tex2D( noiseNormalMap,
                           iTex.xy * half2(4, 3) /* scale depends on screen ratio */ );
    noise = noise * 2.0 - 1.0;

    // Let the scale depend on the environment wideness
    half3 sampleScale = /*SSAO_params.zzw*/ half3(200, 200, 1)
        * saturate( pDepth / 5.0 )  // make the area smaller if the distance is less than 5 meters
        * (1.0 + pDepth / 8.0);     // make the area bigger as the distance grows

    float depthRangeScale = /*viewdist*/ 500 / sampleScale.z * 0.85; // currently unused
    sampleScale.xy *= 1.0 / pDepth;
    sampleScale.z  *= 2.0 / 500 /*PS_NearFarClipDist.y*/;

    // Loop through the samples
    half  pntCount = 0;
    float occl     = 0;
    for (half i = 0; i < sampleCount; i++)
    {
        // Generate a random new point around the center point (in eye space).
        // Note the [i]: without it the same sphere point is reflected every iteration.
        float3 ray = normalize( reflect( sampleSpherePos[i].xyz, noise.xyz ) );

        // Offset the ray to generate a new eye-space point
        float3 pnt = ePos + ray * sampleScale.xyz;

        // Check if it's in the hemisphere. The normal must be in eye space as well!
        // Dark_Nebula suggested bringing the ray to the other side, but for testing
        // I just discard the points "behind" the normal so far.
        if ( dot( ray, normalize(pNrm) ) > 0.0 )
        {
            // Calculate the texture coordinates for this point:
            // bring the coordinate back into clip space
            float2 ss = ( pnt.xy / pnt.z ) * float2( projMatrix[0][0], projMatrix[1][1] );
            float2 sn = ss * 0.5 + 0.5;

            // Get the depth for this other point
            float pntDepth = f1tex2D( sceneDepth, sn );

            // The bigger the difference between the two depths, the less is added
            float zDif = 50.0 * max( pDepth - pntDepth, 0.0 );
            occl += 1.0 / (1.0 + zDif * zDif);
            ++pntCount; // the average is "occl / pntCount"
        }
    } // for

    // Average (guard against dividing by zero when all rays were discarded)
    occl /= max( pntCount, 1 );
    oColor.rgb = occl;
    oColor.a   = 1;
}
Sorry for the huge listing. How do I use those other code boxes with syntax highlighting?
Yet again, thanks for being patient with me :)
Rick
[Edited by - spek on December 8, 2007 4:19:30 AM]