HaroldReyiz

Member · 9 posts · Community Reputation: 104 (Neutral) · Rank: Newbie
  1. HaroldReyiz

    [SSAO] Using a 32 bit normal-depth map

    But all geometry is composed of triangles, so I'm confused. How does the hardware do the culling? IIRC it checks the surface normal of each triangle (interpolated between its 3 vertices) and culls the back-facing triangles by default. So we can conceptually think that this is done in hardware:

```
for( int t = 0 ; t < numberOfTriangles ; ++t )
{
    if( triangles[t].surfaceNormal.z <= 0.0 )
    {
        triangles[t].render() ;
    }
}
```

    Or can we? I think I'm missing some fundamentals here. Also, I'm getting off topic, I know.
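For context, fixed-function back-face culling is usually described in terms of the winding order of the projected screen-space vertices rather than an interpolated normal. A minimal Python sketch of that test (the CCW-is-front convention here is an assumption; it is configurable via the rasterizer state):

```python
def signed_area(p0, p1, p2):
    """Twice the signed area of a 2D triangle; the sign encodes winding order."""
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])

def is_back_facing(p0, p1, p2, front_is_ccw=True):
    # A triangle whose projected winding is opposite the front convention
    # is considered back-facing and would be culled.
    area = signed_area(p0, p1, p2)
    return area <= 0.0 if front_is_ccw else area >= 0.0

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]    # counter-clockwise winding
print(is_back_facing(*tri))                   # False: front-facing
print(is_back_facing(tri[0], tri[2], tri[1])) # True: reversed winding
```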
  2. HaroldReyiz

    [SSAO] Using a 32 bit normal-depth map

    But we already know that the front-facing visible objects have negative z coordinates in view space (towards the camera). If that weren't so, they wouldn't be drawn in the first place.
  3. HaroldReyiz

    [SSAO] Using a 32 bit normal-depth map

    The lines are still there. (Actually, the lines disappeared in the area from the camera to the skull, but for the rest of the scene they were still there. Also, when I changed the camera orientation, lines sometimes still appeared in front of the camera. Long story short: still incorrect.)

    Also, I tried the clamping address mode as #Include Graphics suggested, but the result is still incorrect.

    I can upload the source code if anyone wants to tinker with it.

    Can you explain this a little more to me?
  4. HaroldReyiz

    [SSAO] Using a 32 bit normal-depth map

    > > Yes, the program uses bilateral filtering to smooth out the ambient map.
    >
    > I was asking whether you use bilinear to fetch from the normal-depth buffer. That would explain the lines.

    Sorry, another derp moment there.

```hlsl
SamplerState samNormalDepth
{
    Filter = MIN_MAG_LINEAR_MIP_POINT;

    // Set a very far depth value if sampling outside of the NormalDepth map
    // so we do not get false occlusions.
    AddressU = BORDER;
    AddressV = BORDER;
    BorderColor = float4(0.0f, 0.0f, 0.0f, 1e5f);
};

[...]

float4 normalDepth = gNormalDepthMap.SampleLevel( samNormalDepth, pin.Tex, 0.0f );
```

    This is the sampler state used for both the SSAO and the blur phases. Also, MIN_MAG_MIP_LINEAR is used in the 1st pass.
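A tiny CPU-side sketch of what the BORDER address mode does (point sampling only; `sample_border` is a hypothetical helper, not a D3D API): any tap outside [0,1) returns the border color, so the 1e5 in the alpha channel reads back as a "very far" depth instead of producing a false occluder.

```python
def sample_border(texture, u, v, border_color):
    """Point-sample a 2D texture (list of rows of RGBA tuples) with
    BORDER addressing: out-of-range coordinates return border_color."""
    if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
        return border_color
    height, width = len(texture), len(texture[0])
    x = int(u * width)
    y = int(v * height)
    return texture[y][x]

border = (0.0, 0.0, 0.0, 1e5)                  # matches the BorderColor above
tex = [[(0.1, 0.2, 0.3, 0.4)]]                 # 1x1 dummy normal-depth texel
print(sample_border(tex, 0.5, 0.5, border))    # in range: returns the texel
print(sample_border(tex, 1.5, 0.5, border)[3]) # out of range: 100000.0 depth
```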
  5. HaroldReyiz

    [SSAO] Using a 32 bit normal-depth map

    Oh sorry, I derped a little there and somehow thought of the ambient map. Here's the normal-depth map (rgb = normal, a = depth):

    [attachment=27299:Ssao with normal-depth map.png]
    [attachment=27300:Ssao with normal-depth map 2.png]
  6. HaroldReyiz

    [SSAO] Using a 32 bit normal-depth map

    Okay, I commented out the normal vector comparison condition from the if statement:

```hlsl
if( /*dot(neighborNormalDepth.xyz, centerNormalDepth.xyz) >= 0.8f &&*/
    abs(neighborNormalDepth.a - centerNormalDepth.a) <= 0.2f )
```

    And as you suggested, the vertical lines due to the normal discontinuity disappeared. But the horizontal lines (and the accompanying clusters at the sides) still remain.

    [attachment=27298:Ssao Blur Fixed.png]

    Also, do you know why this blurring code was working with the previous setup (the 64-bit normal-depth map without encoding/decoding) and isn't working now? If we assume the encoding/decoding part works (it doesn't right now, but eventually it will), aren't we essentially doing the same thing as before (the 64-bit map)? I didn't stop to think about it before, but 36 degrees seems like a big difference, and I'm now amazed that it worked with the previous setup. I'm confused about everything right now.
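On the "36 degrees" point: the 0.8 in the commented-out condition is a dot product between unit normals, so it corresponds directly to an angle. A quick check (assuming both normals are normalized):

```python
import math

dot_threshold = 0.8  # the cutoff from the bilateral blur condition
# For unit vectors, dot(n1, n2) = cos(angle), so the largest angle
# still accepted by the blur is arccos(threshold).
max_angle_deg = math.degrees(math.acos(dot_threshold))
print(round(max_angle_deg, 2))  # 36.87
```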
  7. HaroldReyiz

    [SSAO] Using a 32 bit normal-depth map

    Yes, the program uses bilateral filtering to smooth out the ambient map. As requested, I'm posting the correct image (with the normal-depth map in the bottom right corner):

    [attachment=27297:Ssao Correct.png]

    As you can see, both the SSAO part and the blur (bilateral blur) work.

    I didn't make any changes to the blurring code other than the decoding part of the normal-depth map. And when I disable the blur (by commenting out the function call on the CPU side), the image is still incorrect. So there's definitely something wrong with the actual SSAO computation part of the program. I'll try what you're suggesting right now, though.
  8. HaroldReyiz

    [SSAO] Using a 32 bit normal-depth map

    God, I'm such a moron. Okay, I fixed that, and thanks, but unfortunately the result is still incorrect. I'm posting the screenshots (maybe they'll help diagnose what's wrong).

    Without the blur: [attachment=27282:Ssao Without Blur.png]

    With the blur: [attachment=27283:Ssao With Blur.png]

    There are these horizontal lines in the ambient map. Also, there are clusters of pixels near the right and left ends of these lines (you can see it in both pictures). Any idea why that might be?

    My assembly knowledge is kinda non-existent, so I did it the inefficient way. Though I knew about "mad", so no excuses there. Fixed that too. Thanks for all your help, Krypt0n.
  9. So I'm having a problem with SSAO currently and decided that it was time to sign up for gamedev.net.

    I'm studying the book "Introduction to 3D Game Programming with DirectX 11" by Frank D. Luna. I'm on chapter 22 (ambient occlusion), exercise 5, which asks you to use a 32-bit R8G8B8A8_UNORM buffer for the normal-depth map instead of the 64-bit R16G16B16A16_FLOAT used in the chapter (for bandwidth optimization reasons, of course). Anyway, I did the coding, but the result is incorrect. I'm hoping someone can find the problem.

    Here's a full description of the problem: [attachment=27279:Exercise 5.jpg]

    There are 3 passes. In the 1st pass, the scene gets drawn to the normal-depth map. Specifically, view-space normals and depth values for each pixel get written to the normal-depth map. Here's the shader code:

```hlsl
VertexOut VS(VertexIn vin)
{
    VertexOut vout;

    // Transform to view space.
    vout.PosV    = mul(float4(vin.PosL, 1.0f), gWorldView).xyz;
    vout.NormalV = mul(vin.NormalL, (float3x3)gWorldInvTransposeView);

    // Transform to homogeneous clip space.
    vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);

    // Output vertex attributes for interpolation across triangle.
    vout.Tex = mul(float4(vin.Tex, 0.0f, 1.0f), gTexTransform).xy;

    return vout;
}

float4 PS(VertexOut pin, uniform bool gAlphaClip) : SV_Target
{
    // Interpolating normal can unnormalize it, so normalize it.
    pin.NormalV = normalize( pin.NormalV ) ;

    // Store the normal.x and normal.y values in the RG channels of the normalDepthMap.
    // normal.z can later be retrieved with the formula z = -1.0f * sqrt( 1 - ( x * x + y * y ) ).
    // Convert the normal coordinates from [-1,1] to [0,1].
    float4 normalDepthMap ;
    normalDepthMap.rg = pin.NormalV.xy * 0.5f + 0.5f ;

    // Store the depth value in the BA channels of the normalDepthMap.
    // First normalize the depth to the range [0,1] by some scaling factor (like the far plane depth gFarZ).
    // Assign this value to the B channel. Then shift 8 bits to the left and
    // assign the fractional value to the A channel.
    float depth = pin.PosV.z / gFarZ ;
    normalDepthMap.ba = float2( depth, frac( 256 * depth ) ) ;

    return normalDepthMap ;
}
```

    Then in the 2nd pass, a screen-sized quad is drawn to invoke the SSAO pixel shader. The SSAO pixel shader calculates an ambient access value for each pixel so that it can be used when we actually draw the scene to the back buffer. We use the "ambient map" as the render target, so I'll call the texture that holds the ambient access values the "ambient map" from now on. The SSAO shaders:

```hlsl
VertexOut VS(VertexIn vin)
{
    VertexOut vout;

    // Already in NDC space.
    vout.PosH = float4(vin.PosL, 1.0f);

    // We store the index to the frustum corner in the normal x-coord slot.
    vout.ToFarPlane = gFrustumCorners[vin.ToFarPlaneIndex.x].xyz;

    // Pass onto pixel shader.
    vout.Tex = vin.Tex;

    return vout;
}

// Determines how much the sample point q occludes the point p as a function
// of distZ.
float OcclusionFunction(float distZ)
{
    //
    // If depth(q) is "behind" depth(p), then q cannot occlude p. Moreover, if
    // depth(q) and depth(p) are sufficiently close, then we also assume q cannot
    // occlude p because q needs to be in front of p by Epsilon to occlude p.
    //
    // We use the following function to determine the occlusion.
    //
    //  1.0          -------------\
    //               |            |  \
    //               |            |    \
    //               |            |      \
    //               |            |        \
    //               |            |          \
    //               |            |            \
    //  -----|-------|------------|-------------|---------|--> zv
    //       0      Eps           z0            z1
    //
    float occlusion = 0.0f;
    if(distZ > gSurfaceEpsilon)
    {
        float fadeLength = gOcclusionFadeEnd - gOcclusionFadeStart;

        // Linearly decrease occlusion from 1 to 0 as distZ goes
        // from gOcclusionFadeStart to gOcclusionFadeEnd.
        occlusion = saturate( (gOcclusionFadeEnd-distZ)/fadeLength );
    }

    return occlusion;
}

float4 PS(VertexOut pin, uniform int gSampleCount) : SV_Target
{
    // p -- the point we are computing the ambient occlusion for.
    // n -- normal vector at p.
    // q -- a random offset from p.
    // r -- a potential occluder that might occlude p.

    // Get viewspace normal and z-coord of this pixel. The tex-coords for
    // the fullscreen quad we drew are already in uv-space.
    float4 normalDepth = gNormalDepthMap.SampleLevel( samNormalDepth, pin.Tex, 0.0f );

    // Reconstruct the normal vector. Red and Green channels have the x and y values of the normal vector in [0,1].
    // Reconstruct the z component by using the formula z = -1.0f * sqrt( 1 - ( x * x + y * y ) ).
    // Lastly convert the coordinates to the interval [-1,1].
    float2 xy = normalDepth.rg ;
    float3 n ;
    n.xy = ( xy - 0.5f ) * 2.0f ;
    n.z  = -1.0f * sqrt( 1.0f - ( xy.x * xy.x + xy.y * xy.y ) ) ;

    // Reconstruct the depth value. Blue and Alpha channels combined have the depth value.
    // Depth = Blue channel + Alpha channel shifted 8 bits to the right (by dividing by 256).
    // Also multiply by the depth of the far plane to return to view space.
    float pz = ( normalDepth.b + normalDepth.a / 256.0f ) * pin.ToFarPlane.z ;

    //
    // Reconstruct full view space position (x,y,z).
    // Find t such that p = t*pin.ToFarPlane.
    // p.z = t*pin.ToFarPlane.z
    // t = p.z / pin.ToFarPlane.z
    //
    float3 p = (pz/pin.ToFarPlane.z)*pin.ToFarPlane;

    // Extract random vector and map from [0,1] --> [-1, +1].
    float3 randVec = 2.0f*gRandomVecMap.SampleLevel(samRandomVec, 4.0f*pin.Tex, 0.0f).rgb - 1.0f;

    float occlusionSum = 0.0f;

    // Sample neighboring points about p in the hemisphere oriented by n.
    [unroll]
    for(int i = 0; i < gSampleCount; ++i)
    {
        // Our offset vectors are fixed and uniformly distributed (so that our offset vectors
        // do not clump in the same direction). If we reflect them about a random vector
        // then we get a random uniform distribution of offset vectors.
        float3 offset = reflect(gOffsetVectors[i].xyz, randVec);

        // Flip offset vector if it is behind the plane defined by (p, n).
        float flip = sign( dot(offset, n) );

        // Sample a point near p within the occlusion radius.
        float3 q = p + flip * gOcclusionRadius * offset;

        // Project q and generate projective tex-coords.
        float4 projQ = mul(float4(q, 1.0f), gViewToTexSpace);
        projQ /= projQ.w;

        // Find the nearest depth value along the ray from the eye to q (this is not
        // the depth of q, as q is just an arbitrary point near p and might
        // occupy empty space). To find the nearest depth we look it up in the depthmap.
        float4 samp = gNormalDepthMap.SampleLevel( samNormalDepth, projQ.xy, 0.0f ) ;
        float rz = ( samp.b + samp.a / 256.0f ) * pin.ToFarPlane.z ;

        // Reconstruct full view space position r = (rx,ry,rz). We know r
        // lies on the ray of q, so there exists a t such that r = t*q.
        // r.z = t*q.z ==> t = r.z / q.z
        float3 r = (rz / q.z) * q;

        //
        // Test whether r occludes p.
        //   * The product dot(n, normalize(r - p)) measures how much in front
        //     of the plane (p,n) the occluder point r is. The more in front it is, the
        //     more occlusion weight we give it. This also prevents self shadowing where
        //     a point r on an angled plane (p,n) could give a false occlusion since they
        //     have different depth values with respect to the eye.
        //   * The weight of the occlusion is scaled based on how far the occluder is from
        //     the point we are computing the occlusion of. If the occluder r is far away
        //     from p, then it does not occlude it.
        //
        float distZ = p.z - r.z;
        float dp = max(dot(n, normalize(r - p)), 0.0f);
        float occlusion = dp * OcclusionFunction(distZ);

        occlusionSum += occlusion;
    }

    occlusionSum /= gSampleCount;

    float access = 1.0f - occlusionSum;

    // Sharpen the contrast of the SSAO map to make the SSAO effect more dramatic.
    return saturate(pow(access, 4.0f));
}
```

    And lastly, the 3rd pass, which is the blurring phase. Because we take only 14 samples in the 2nd pass, the ambient map is a little fuzzy. Instead of taking enough samples to make it look fine (which would hurt performance), a bilateral blur is applied 4 times to make up for the lack of sufficient samples.
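As a sanity check, the falloff in OcclusionFunction is easy to mirror on the CPU. A minimal Python sketch (the epsilon and fade defaults here are assumed placeholders, not necessarily the values the demo uses):

```python
def occlusion_function(dist_z, surface_epsilon=0.05,
                       fade_start=0.2, fade_end=2.0):
    """Mirror of the HLSL OcclusionFunction: returns 0 at or below the
    surface epsilon, then fades linearly from 1 to 0 between
    fade_start and fade_end."""
    if dist_z <= surface_epsilon:
        return 0.0
    fade_length = fade_end - fade_start
    # saturate() clamps to [0, 1]
    return min(max((fade_end - dist_z) / fade_length, 0.0), 1.0)

print(occlusion_function(0.01))           # too close to occlude: 0.0
print(round(occlusion_function(1.1), 6))  # halfway through the fade: 0.5
print(occlusion_function(5.0))            # beyond fade_end: 0.0
```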
    Here's the blur HLSL code:

```hlsl
VertexOut VS(VertexIn vin)
{
    VertexOut vout;

    // Already in NDC space.
    vout.PosH = float4(vin.PosL, 1.0f);

    // Pass onto pixel shader.
    vout.Tex = vin.Tex;

    return vout;
}

float4 PS(VertexOut pin, uniform bool gHorizontalBlur) : SV_Target
{
    float2 texOffset;
    if(gHorizontalBlur)
    {
        texOffset = float2(gTexelWidth, 0.0f);
    }
    else
    {
        texOffset = float2(0.0f, gTexelHeight);
    }

    // The center value always contributes to the sum.
    float4 color = gWeights[5]*gInputImage.SampleLevel(samInputImage, pin.Tex, 0.0);
    float totalWeight = gWeights[5];

    float4 samp = gNormalDepthMap.SampleLevel( samNormalDepth, pin.Tex, 0.0f );

    float4 centerNormalDepth ;
    centerNormalDepth.xy = 2.0f * ( samp.xy - 0.5f ) ;
    centerNormalDepth.z  = -1.0f * sqrt( 1.0f - ( samp.r * samp.r + samp.g * samp.g ) ) ;
    centerNormalDepth.w  = ( samp.b + samp.a / 256.0f ) * gFarZ ;

    for(float i = -gBlurRadius; i <= gBlurRadius; ++i)
    {
        // We already added in the center weight.
        if( i == 0 )
            continue;

        float2 tex = pin.Tex + i*texOffset;

        float4 nSamp = gNormalDepthMap.SampleLevel( samNormalDepth, tex, 0.0f ) ;

        float4 neighborNormalDepth ;
        neighborNormalDepth.xy = 2.0f * ( nSamp.xy - 0.5f ) ;
        neighborNormalDepth.z  = -1.0f * sqrt( 1.0f - ( nSamp.r * nSamp.r + nSamp.g * nSamp.g ) ) ;
        neighborNormalDepth.w  = ( nSamp.b + nSamp.a / 256.0f ) * gFarZ ;

        //
        // If the center value and neighbor values differ too much (either in
        // normal or depth), then we assume we are sampling across a discontinuity.
        // We discard such samples from the blur.
        //
        if( dot(neighborNormalDepth.xyz, centerNormalDepth.xyz) >= 0.8f &&
            abs(neighborNormalDepth.a - centerNormalDepth.a) <= 0.2f )
        {
            float weight = gWeights[i+gBlurRadius];

            // Add neighbor pixel to blur.
            color += weight*gInputImage.SampleLevel( samInputImage, tex, 0.0);
            totalWeight += weight;
        }
    }

    // Compensate for discarded samples by making total weights sum to 1.
    return color / totalWeight;
}
```

    I didn't post the CPU side of the code because I think the problem is in one of the fx files (SsaoNormalDepth.fx, Ssao.fx and SsaoBlur.fx, respectively). But I can post it (or any other part of the code) if anyone needs it. Specifically, I think that converting the data from the 64-bit format to the 32-bit one in the 1st pass (and doing the reverse in the 2nd and 3rd passes) is the main problem (the stuff explained in the exercise description), and I think I'm doing that wrong.

    Also, when I disable the blur pass to see if the ambient map is correct, I see that it's not: false occlusions everywhere. I'm not saying the blur code is fine; it's probably buggy too, but I'm sure there's something wrong before execution gets there.

    Anyway, this was lengthy. I hope it was clear. If not, I'm open to suggestions. Thanks in advance!
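For what it's worth, the depth pack/unpack math from the shaders can be sanity-checked on the CPU by simulating 8-bit UNORM storage. This is a minimal sketch of that idea (round-to-nearest quantization is an assumption about the hardware); it reports the worst-case reconstruction error over a sweep of normalized depths:

```python
def quantize_unorm8(x):
    """Simulate storing a [0,1] float in an 8-bit UNORM channel."""
    return round(min(max(x, 0.0), 1.0) * 255.0) / 255.0

def pack_depth(depth):
    # Mirrors: normalDepthMap.ba = float2(depth, frac(256 * depth));
    b = quantize_unorm8(depth)
    a = quantize_unorm8((256.0 * depth) % 1.0)
    return b, a

def unpack_depth(b, a):
    # Mirrors: normalDepth.b + normalDepth.a / 256.0f
    return b + a / 256.0

max_err = 0.0
for i in range(10001):
    d = i / 10000.0
    b, a = pack_depth(d)
    max_err = max(max_err, abs(unpack_depth(b, a) - d))

print(max_err)  # worst-case roundtrip error over the sweep
```

Printing the error this way makes it easy to compare alternative encodings of the same 16 bits.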