Boulotaur2024

  1. So... correct me if I'm wrong, but that would mean I am already in camera space by the time I apply my SSAO, because I'm still running the same code:

        float3 reconstructCSPosition(float2 S, float z)
        {
            //return float3((S.xy * projInfo.xy + projInfo.zw) * z, z); // Original code... doesn't work: AO is barely visible
            return float3(S, z); // Desperate code... seems to work? (see attached image)
        }

     That obviously doesn't reconstruct anything at all (and, I should add, I still completely lack the blue component in my normals), but the result seems correct (see below). Case solved?

     I'm confused because I really thought post-processing effects like SSAO necessarily start from screen space, and the shader code explicitly asks for a conversion from screen space to camera space... but oh well, it seems to run fine without any of that.

     I'm sorry if the thread is a bit of a mess... it's hard to explain everything properly, especially when you're confused.

     Thanks MJP
  2. Good observation. I can't debug the shader, unfortunately: PIX won't work in combination with the injector I'm working with. (Well, I could still output colors according to values in the shader...) But I read that HLSL's default behaviour is to treat matrices as column-major unless told otherwise by a dedicated pragma: http://fgiesen.wordpress.com/2012/02/12/row-major-vs-column-major-row-vectors-vs-column-vectors/#comment-4318

     This is really weird, because out of desperation I tried the following (see the relevant comment in the reconstructCSPosition function):

        // ------------------- Original code comment about how to build the projection vector ("projInfo") ---------------------------
        // vec4(-2.0f / (width*P[0][0]),
        //      -2.0f / (height*P[1][1]),
        //      (1.0f - P[0][2]) / P[0][0],
        //      (1.0f + P[1][2]) / P[1][1])
        //
        // where P is the projection matrix that maps camera space points
        // to [-1, 1] x [-1, 1]. That is, GCamera::getProjectUnit().

        // ------------------- My attempt at building the projection vector ("projInfo") described above on the CPU ------------------
        // width = 1024;   // viewport width
        // height = 768;   // viewport height
        // D3DXMatrixPerspectiveFovLH(&matProjection, (float)D3DXToRadian(90), (FLOAT)width / (FLOAT)height, (FLOAT)1.0, (FLOAT)70.0);
        // D3DXMatrixInverse(&matProjection, NULL, &matProjection);
        // D3DXVECTOR4 projInfo;
        // projInfo.x = -2.0f / ((float)width*matProjection._11);
        // projInfo.y = -2.0f / ((float)height*matProjection._22);
        // projInfo.z = ((1.0f - matProjection._13) / matProjection._11) + projInfo.x * 0.5f;
        // projInfo.w = ((1.0f + matProjection._23) / matProjection._22) + projInfo.y * 0.5f;
        // HRESULT hr = effect->SetMatrix(matProjectionHandle, &matProjection);
        // hr = effect->SetVector(projInfoHandle, &projInfo);

        // ------------------- HLSL function from SAO shader to go from screen-space to camera-space ----------------------------------
        float4 projInfo;

        static const float nearZ = 1.0;  // Arbitrary values: I can't retrieve the exact values from the game
        static const float farZ = 70.0;

        float LinearizeDepth(float depth)
        {
            return rcp(depth * ((farZ - nearZ) / (-farZ * nearZ)) + farZ / (farZ * nearZ));
        }

        /** Reconstruct camera-space P.xyz from screen-space S = (x, y) in pixels and camera-space z < 0.

            Assumes that the upper-left pixel center is at (0.5, 0.5) [but that need not be the
            location at which the sample tap was placed!]

            Costs 3 MADD. Error is on the order of 10^3 at the far plane, partly due to z precision.
        */
        float3 reconstructCSPosition(float2 S, float z)
        {
            //return float3((S.xy * projInfo.xy + projInfo.zw) * z, z); // Original code... doesn't work: AO is barely visible
            return float3(S, z); // Desperate code... seems to work? (see attached image)
        }

        /** Read the camera-space position of the point at screen-space pixel ssP */
        float3 getPosition(float2 ssP)
        {
            float3 P;
            P.z = LinearizeDepth(tex2D(depthSampler, ssP).r); // retrieves the post-projection depth from my INTZ texture

            // Offset to pixel center
            P = reconstructCSPosition(float2(ssP) + float2(0.5, 0.5), P.z);
            return P;
        }

     And it seems to give decent results minus the self-occlusion issues (see AO output and normals attached below)... a lot better than my previous efforts, at least, although I've been told every edge should have at least some blue in it and there is none at all in my normals (most likely because there is no position reconstruction in my "desperate code").

     My brain is melting
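     For what it's worth, here is a sketch of where that LinearizeDepth formula comes from, assuming the game uses a standard left-handed perspective projection (D3DXMatrixPerspectiveFovLH) with the same nearZ/farZ guesses as above; the helper name below is hypothetical, it's just LinearizeDepth rewritten:

        // For D3DXMatrixPerspectiveFovLH the post-projection terms are:
        //   z_proj = z_view * farZ / (farZ - nearZ) - nearZ * farZ / (farZ - nearZ)
        //   w_proj = z_view
        // and the hardware depth buffer (the INTZ texture) stores depth = z_proj / w_proj.
        // Solving that for z_view:
        //   z_view = nearZ * farZ / (farZ - depth * (farZ - nearZ))
        // which is algebraically identical to the rcp() expression in LinearizeDepth.
        float ViewSpaceZFromHardwareDepth(float depth)
        {
            return (nearZ * farZ) / (farZ - depth * (farZ - nearZ));
        }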
  3. So I've been trying to implement Scalable Ambient Obscurance for a few weeks now, but I'm still horribly stuck at the "getting the right coordinates/space" part. I can't believe how hard it is to figure out. My fault, probably. But as an excuse, my setup is a bit unusual, since I'm not working with some kind of ready-made game engine but with a DX9 injector that... injects SSAO into DX9 games. It already works nicely except for all these advanced SSAO algorithms (like Scalable Ambient Obscurance) that specifically need camera-space/view-space positions.

     Since I can't really retrieve the original projection matrix from the game I'm hooking, what I do for now is manually create it on the CPU and then invert it:

        // create a projection matrix
        width = 1024;   // viewport width
        height = 768;   // viewport height
        D3DXMatrixPerspectiveFovLH(&matProjectionInv, (float)D3DXToRadian(90), (FLOAT)width / (FLOAT)height, 0.1, 100);
        D3DXMatrixInverse(&matProjectionInv, NULL, &matProjectionInv);

     Then, once in my SSAO shader, I naturally try to reconstruct position from depth using the above inverted projection matrix (yes, this is a snippet from MJP's blog):

        extern const float4x4 matProjectionInv;

        struct VSOUT
        {
            float4 vertPos : POSITION0;
            float2 UVCoord : TEXCOORD0;
        };

        struct VSIN
        {
            float4 vertPos : POSITION0;
            float2 UVCoord : TEXCOORD0;
        };

        VSOUT FrameVS(VSIN IN)
        {
            VSOUT OUT = (VSOUT)0.0f;
            OUT.vertPos = IN.vertPos;
            OUT.UVCoord = IN.UVCoord;
            return OUT;
        }

        float3 VSPositionFromDepth(float2 vTexCoord)
        {
            // Get the depth value for this pixel
            float z = tex2D(depthSampler, vTexCoord).r;
            // Get x/w and y/w from the viewport position
            float x = vTexCoord.x * 2 - 1;
            float y = (1 - vTexCoord.y) * 2 - 1;
            float4 vProjectedPos = float4(x, y, z, 1.0f);
            // Transform by the inverse projection matrix
            float4 vPositionVS = mul(vProjectedPos, matProjectionInv);
            // Divide by w to get the view-space position
            return vPositionVS.xyz / vPositionVS.w;
        }

        float4 SSAOCalculate(VSOUT IN) : COLOR0
        {
            float3 pos = VSPositionFromDepth(IN.UVCoord);
            //return float4(pos, 1.0); // See attached image 1
            float3 n_C = reconstructCSFaceNormal(pos); // normalize(cross(ddy(pos), ddx(pos)));
            return float4(n_C, 1.0); // See attached image 2
            // Truncated for readability reasons
        }

     The first attachment is the "position map" (I guess that's the name?) and the second is the normal map. Obviously the normals are very wrong. I've been told they look as if they were world-space normals, while I'm desperately trying to get camera-space normals (they are required for SAO to work properly). I should mention that I'm using a hardware depth buffer (an INTZ texture) which I read from directly. According to MJP, "a hardware depth buffer will store the post-projection Z value divided by the post-projection W value" (I'm not sure I completely understand what that means, by the way).

     So is the above reconstruction incomplete? Is my "fake" projection matrix sufficient/correct?

     NB: the full project will be available on GitHub once it's working.

     [attachment=23617:screenshot_2014-09-15_08-48-01_0.png][attachment=23618:screenshot_2014-09-15_08-48-16_1.png]
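     As a side note, here is a sketch of an alternative reconstruction for symmetric perspective projections (which D3DXMatrixPerspectiveFovLH produces). It assumes the forward, non-inverted matrix were also passed to the shader as a hypothetical "matProjection" uniform, and that the linear view-space Z had already been recovered from the depth buffer:

        extern const float4x4 matProjection; // hypothetical: forward (non-inverted) projection matrix

        // For a symmetric projection only _11 and _22 affect x and y, so the
        // full inverse matrix and the divide by w can be skipped once the
        // linear view-space depth z_view is known.
        float3 VSPositionFromLinearDepth(float2 vTexCoord, float z_view)
        {
            float x_ndc = vTexCoord.x * 2 - 1;
            float y_ndc = (1 - vTexCoord.y) * 2 - 1;
            return float3(x_ndc * z_view / matProjection._11,
                          y_ndc * z_view / matProjection._22,
                          z_view);
        }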
  4. Ok, I suspected the answer would be "no" but I wanted to give it a shot anyway. *sigh*

     Thanks for the detailed answers! I wish I were as comfortable as you with vector maths.

     EDIT: if a projection matrix is absolutely needed to go from screen space to camera space, couldn't I build it myself on the CPU before passing it to my shader (with arbitrary nearZ/farZ, of course)? Like so: http://stackoverflow.com/posts/18406650/revisions

     Ok, disregard my previous question: I had missed that part.
  5. OK, I know it sounds really stupid, but... do you know if conversion from screen-space to camera-space coordinates is possible without using any matrix transformation at all?

     I'll explain a bit why I'm asking. Basically I'm trying to mod a certain DX9 game by hooking it and applying various effects on top of it. I've got a few SSAO implementations working nicely already. But I noticed that some of these SSAO shaders require camera-space positions (more specifically, I'm trying to port Scalable Ambient Obscurance to DX9), and they most certainly don't work properly without them.

     Oh, and... retrieving the game's original projection matrix is not possible in my case :/

     So all I have is the screen-space position from my very simple vertex shader:

        VSOUT FrameVS(VSIN IN)
        {
            VSOUT OUT = (VSOUT)0.0f;
            OUT.vertPos = IN.vertPos;
            OUT.UVCoord = IN.UVCoord;
            return OUT;
        }

     Which is certainly insufficient to convert from screen space to a camera-space/view-space position, right?

     Now, something that still puzzles me to this day (and you're going to laugh at me, because I copy/pasted some magic code that works, to some extent, but that I can't really understand):

        const float fovy = 40.0 * 3.14159265 / 180.0; // 40 deg in radians
        const float invFocalLenX = tan(fovy * 0.5) * width / height;
        const float invFocalLenY = tan(fovy * 0.5);

        vec3 uv_to_eye(vec2 uv, float eye_z)
        {
            uv = (uv * vec2(2.0, -2.0) - vec2(1.0, -1.0));
            return vec3(uv * vec2(invFocalLenX, invFocalLenY) * eye_z, eye_z); // get eye position
        }

        vec3 fetch_eye_pos(vec2 uv)
        {
            float z = texture2D(tex1, uv).a; // Depth/Normal buffer
            return uv_to_eye(uv, z);
        }

     Correct me if I'm wrong, but from screen space this code should get me back to the eye-space position (I guess that's the same as the camera-space position, right?)

     ...And it doesn't use any matrix transformation at all...
     ...And it does work, at least for my HBAO shader. So I was mostly happy copy/pasting this without having to understand how the magic happens... but now that it doesn't work at all with SAO (Scalable Ambient Obscurance), I realize I'm mostly clueless about how all these things work and what the missing piece of the puzzle is, for that matter.

     Sorry for sounding so ignorant. I am
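     (As far as I can tell, and assuming a standard symmetric perspective projection such as the one D3DXMatrixPerspectiveFovLH builds, the reason this works without a matrix is that invFocalLenX/invFocalLenY are just the reciprocals of the only two projection-matrix entries that matter for x and y:)

        // For D3DXMatrixPerspectiveFovLH with aspect = width / height:
        //   P._11 = cot(fovy / 2) / aspect        P._22 = cot(fovy / 2)
        // so
        //   invFocalLenX = tan(fovy / 2) * aspect = 1 / P._11
        //   invFocalLenY = tan(fovy / 2)          = 1 / P._22
        // In other words, uv_to_eye() undoes the x/y part of the projection entry
        // by entry; a matrix is only really needed to recover eye_z from the
        // depth buffer, which is handled separately here.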
  6. (Wish I could bump an old thread because I'm having the same issue as someone, but I can't, so let's start a new thread.)

     I adapted and modified some SSAO code for my engine (well, it's actually an injector that adds effects to existing game engines). It works well, but I noticed some strange edge artifacts. So I decided to have a closer look at it and output the normals only:

     This is a portion of a 1680x1050 (my native res) screenshot. Notice how aliased the edges are? Somehow it looks as if my depth buffer were output at half the resolution and then upscaled to my native res (the backbuffer is fine and aliasing-free).

     I think this is the same issue shown here. Yet I use the method described by MJP in that very same topic to generate normals directly from the hardware depth buffer, since I'm able to reconstruct view-space position from my depth. Could it be that the ddx/ddy operators (found in ssao_Main()) generate these artifacts? (I'm an AMD user.)

        extern float FOV = 75;

        static float2 rcpres = PIXEL_SIZE;  // (1.0/Width, 1.0/Height)
        static float aspect = rcpres.y / rcpres.x;
        static const float nearZ = 1;
        static const float farZ = 1000;
        static const float2 g_InvFocalLen = { tan(0.5f*radians(FOV)) / rcpres.y * rcpres.x, tan(0.5f*radians(FOV)) };
        static const float depthRange = nearZ - farZ;

        // Textures/samplers definitions omitted for clarity

        struct VSOUT
        {
            float4 vertPos : POSITION0;
            float2 UVCoord : TEXCOORD0;
        };

        struct VSIN
        {
            float4 vertPos : POSITION0;
            float2 UVCoord : TEXCOORD0;
        };

        VSOUT FrameVS(VSIN IN)
        {
            VSOUT OUT;
            float4 pos = float4(IN.vertPos.x, IN.vertPos.y, IN.vertPos.z, 1.0f);
            OUT.vertPos = pos;
            float2 coord = float2(IN.UVCoord.x, IN.UVCoord.y);
            OUT.UVCoord = coord;
            return OUT;
        }

        float readDepth(in float2 coord : TEXCOORD0)
        {
            float posZ = tex2D(depthSampler, coord).r; // Depth is stored in the red component (INTZ)
            return (2.0f * nearZ) / (nearZ + farZ - posZ * (farZ - nearZ)); // Get eye_z
        }

        float3 getPosition(in float2 uv : TEXCOORD0, in float eye_z : POSITION0)
        {
            uv = (uv * float2(2.0, -2.0) - float2(1.0, -1.0));
            float3 pos = float3(uv * g_InvFocalLen * eye_z, eye_z);
            return pos;
        }

        float4 ssao_Main(VSOUT IN) : COLOR0
        {
            float depth = readDepth(IN.UVCoord);
            float3 pos = getPosition(IN.UVCoord, depth);
            float3 norm = cross(normalize(ddx(pos)), normalize(ddy(pos)));
            return float4(norm, 1.0); // Output normals
        }

        technique t0
        {
            pass p0
            {
                VertexShader = compile vs_3_0 FrameVS();
                PixelShader = compile ps_3_0 ssao_Main();
            }
        }

     The depthSampler has its MinFilter set to POINT. I tried changing it to LINEAR out of curiosity, but it doesn't change anything.

     While searching around the site I found a post explaining that "after converting a z-buffer depth into a world position, then you will run into continuity problems along edges, simply because of the severe change in Z." Could it be because of that? Any way to work around that?
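     One workaround I've seen suggested (just a sketch, not something from the SSAO code I adapted, and based only on the fact that ddx/ddy are evaluated per 2x2 pixel quad) is to skip the derivative instructions and rebuild the normal from explicitly sampled neighbours, keeping on each axis the difference that crosses the smaller depth gap so the cross product never straddles a depth discontinuity:

        // Hypothetical helper; it reuses rcpres, readDepth() and getPosition()
        // exactly as defined above. The cross-product order may need flipping
        // to match the handedness of the ddx/ddy version.
        float3 getNormalFromNeighbours(float2 uv)
        {
            float3 p  = getPosition(uv, readDepth(uv));
            float3 pr = getPosition(uv + float2(rcpres.x, 0), readDepth(uv + float2(rcpres.x, 0))); // right
            float3 pl = getPosition(uv - float2(rcpres.x, 0), readDepth(uv - float2(rcpres.x, 0))); // left
            float3 pd = getPosition(uv + float2(0, rcpres.y), readDepth(uv + float2(0, rcpres.y))); // down
            float3 pu = getPosition(uv - float2(0, rcpres.y), readDepth(uv - float2(0, rcpres.y))); // up

            // On each axis, keep the difference with the smaller depth change
            float3 dx = abs(pr.z - p.z) < abs(p.z - pl.z) ? pr - p : p - pl;
            float3 dy = abs(pd.z - p.z) < abs(p.z - pu.z) ? pd - p : p - pu;

            return normalize(cross(dx, dy));
        }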