kazemi

Members
  • Content count

    12
Community Reputation

126 Neutral

About kazemi

  • Rank
    Member
  1. Hi, my question is basically: how can I implement a cylindrical projection with a pixel/vertex shader in Direct3D? I want to achieve the effect of a full 180-degree camera, which seems impossible with a standard projection matrix. Thanks. Roughly, the mapping I'm after is sketched below.
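    To clarify what I mean (my own sketch, assuming view-space coordinates (x, y, z) with the camera looking down +z, horizontal FOV θh and vertical FOV θv):

    x_ndc = atan2(x, z) / (θh/2),    y_ndc = y / (sqrt(x² + z²) · tan(θv/2))

    Screen x is proportional to the yaw angle rather than to x/z, which as far as I can tell cannot be expressed as a single 4x4 matrix; hence the shader question.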
  2. [quote name='Digitalfragment' timestamp='1313235333' post='4848597'] It's easier to calculate your view & projection matrices as per normal, but then edit the [i]projection matrix[/i] with a scale & translate matrix to shift the results onto the appropriate screen. Remember that the results of the final projection matrix yield a clip space coordinate from -1 : 1. You want -1 : 0 to end up on the left screen and 0 : 1 to end up on the right screen. [/quote] I came up with this projection matrix:

    [source]
    void Matrix4::perspectiveFovOffCenter(float fov, float aspect, float zn, float zf, float cX, float cY)
    {
        float yScale = cotf(fov/2);
        float xScale = yScale / aspect;
        float xOffset = -2*cX;
        float yOffset = -2*cY;

        _11 = xScale;  _12 = 0;       _13 = 0;              _14 = 0;
        _21 = 0;       _22 = yScale;  _23 = 0;              _24 = 0;
        _31 = xOffset; _32 = yOffset; _33 = zf/(zf-zn);     _34 = 1;
        _41 = 0;       _42 = 0;       _43 = zn*zf/(zn-zf);  _44 = 0;
    }
    [/source]

    This seems to work fine, except that it messes up half of my shaders: I get distorted shadows and lighting artifacts on the translated image!
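    For reference, a minimal sketch of the scale-and-translate composition Digitalfragment describes, assuming the row-vector (v * M) conventions of the Matrix4 above and a hypothetical Matrix4::identity() helper. For the left screen, clip-space x should end up in [-1, 0], so x is scaled by 0.5 and shifted by -0.5; the shift goes through _41 so it is multiplied by w and survives the perspective divide:

    [source lang="cpp"]
    // Post-multiply the normal projection with a scale & translate that
    // remaps clip-space x from [-1, 1] onto [-1, 0] (the left screen).
    Matrix4 matScreen;
    matScreen.identity();
    matScreen._11 = 0.5f;   // halve the clip-space x range
    matScreen._41 = -0.5f;  // shift by -0.5*w, i.e. -0.5 after the w divide

    Matrix4 matProjLeft = matProj * matScreen;  // row-vector order: v * P * S
    // The right screen uses _41 = +0.5f instead, mapping x onto [0, 1].
    [/source]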
  3. I am trying to use multiple monitors to extend the field of view. To do so, I apply an offset to my view matrix like this:

    [source]
    Matrix4 matXYZ;
    matXYZ.translation(m_position);

    Matrix4 matRotation;
    matRotation.setYawPitchRoll(m_yaw, m_pitch, m_roll);

    Matrix4 matOffset;
    matOffset.rotationY(offset);

    Matrix4 matTransformation = matOffset * matRotation * matXYZ;
    [/source]

    Now the problem is that when I rotate around the Y axis, the horizon does not appear as a single straight line (there are discontinuities at the monitor borders). How can I fix this problem? (The attached image shows what I want to achieve! One direction I'm considering is sketched below.)
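    A possible direction (my own sketch, untested): since each monitor here gets its own rotated camera, each one projects onto a different plane, so straight lines kink at the seams. Keeping a single view matrix and giving every monitor a horizontally shifted sub-frustum, e.g. with the perspectiveFovOffCenter() from the earlier post, makes all screens share one projection plane (assuming the usual fov/aspect/zn/zf parameters):

    [source lang="cpp"]
    // Sketch for three side-by-side monitors: one shared camera/view matrix,
    // one off-center projection per screen. With xOffset = -2*cX as above,
    // cX = -1, 0, +1 selects the left, center and right thirds of the frustum.
    Matrix4 matProj[3];
    for (int screen = 0; screen < 3; ++screen)
    {
        float cX = (float)(screen - 1);
        matProj[screen].perspectiveFovOffCenter(fov, aspect, zn, zf, cX, 0.0f);
    }
    [/source]

    The tradeoff is stretching toward the outer edges for very wide total FOVs, which is what leads to the cylindrical-projection question above.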
  4. SSAO Problem

    [quote name='johnchapman' timestamp='1312978097' post='4847139'] How are you actually generating the sample kernel? As maxest says, if there's any regularity to it, that is what will be causing the interference pattern (and why increasing the number of samples doesn't seem to reduce the effect). [/quote]

    I have to correct my answer: increasing the number of samples does reduce the artifacts slightly, but it also makes it super slow. I think 16 was the value recommended by the developer of the algorithm, and I'm using 24. This is how I generate the random vectors. First I create a 4x4 random noise texture:

    [source language=cpp]
    int ncomp = 3;
    int w = 4, h = 4;
    int length = w * h;
    float *data = new float[length*ncomp];
    for (int i = 0; i < length; ++i)
    {
        Vector3 v(frand(), frand(), frand());
        v.normalize();
        data[i*ncomp]   = v.x;
        data[i*ncomp+1] = v.y;
        data[i*ncomp+2] = v.z;
    }
    Texture* pTexture = Texture::create(SurfaceDesc(DXGI_FORMAT_R32G32B32_FLOAT, w, h, 1, 0), 12, data);
    [/source]

    and pass it to the shader as gNoiseVecMap. I sample from the noise texture:

    [source]
    float2 rotationTC = pIn.pos.xy/4;
    float3 vRotation = gNoiseVecMap.Sample(gPointSam, rotationTC);
    [/source]

    Then I use this vector to build a rotation matrix for each pixel:

    [source]
    float h = 1/(1+vRotation.z);
    float3x3 rotMat = float3x3(
        h*vRotation.y*vRotation.y + vRotation.z, -h*vRotation.y*vRotation.x,              vRotation.x,
        -h*vRotation.y*vRotation.x,              h*vRotation.x*vRotation.x + vRotation.z, vRotation.y,
        -vRotation.x,                            -vRotation.y,                            vRotation.z);
    [/source]

    The rotation matrix is then used to rotate a set of vectors of various lengths to create pseudo-random offsets, which are then transformed to screen space:

    [source]
    float3 vOffset = xyz[i%8]*(offsetScale*=offsetScaleStep);
    float3 vRotatedOffset = mul(vOffset, rotMat);
    float3 vRandVector = float3(vRotatedOffset.xy*gProjParams.screenDimension, vRotatedOffset.z*fSceneDepthP*2);
    [/source]
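    For reference, I believe that per-pixel matrix is just Rodrigues' formula for the rotation taking the z axis onto the noise vector v (with c = v.z, w = e3 × v = (-v.y, v.x, 0), and h = 1/(1+c)):

    R = I + [w]x + h·[w]x²

    It degenerates as v.z approaches -1 (h blows up), which shouldn't happen here as long as frand() returns non-negative values.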
  5. [quote name='EJH' timestamp='1312991705' post='4847193'] Threads like this really make me question whether GD.net has the most subtle trolls in history: - [u]OP[/u]: "How do I get emails of AAA developers to use my algorithm, despite the fact that I cannot show a demo, video, or screenshot, or even provide a single numerical comparison. Oh, and they have to sign an NDA." - [u]Guy later in thread[/u]: "My engine is superior to Crysis 2, yet I have never released a single game with it, nor can I provide a demo. To prove my engine's superiority, here is a screenshot of some buttons and text rendered on a couple of planes." Can't say this isn't entertaining though. [img]http://public.gamedev.net/public/style_emoticons/default/laugh.gif[/img] [/quote] Very well said! Generally I think the more you learn and the more knowledge you have, the less you brag about the things you have done.
  6. SSAO Problem

    Hi John, I have tried both changing the kernel radius (by decreasing the offset and step size) and increasing the sample count, but neither reduced the artifacts. My guess is that I'm not retrieving the depth accurately. The edges are not a big issue; the problem for me is those weird-looking triangular artifacts in the middle of the screen. Thanks for the tutorial link, I will look into that. [quote name='johnchapman' timestamp='1312813612' post='4846174'] Looking at the screenshot I'd say it's a combination of a large sample kernel radius with a low number of samples; does decreasing the kernel radius/increasing the number of samples improve the problem? The black "edges" are a result of sampling beyond the edges of the depth texture. The result of this will change depending on the texture wrap mode (repeat, clamp to edge, etc.), or you could try clamping your texture coordinates to [0, 1]. As a side note, you might be interested in a tutorial I wrote a while back that extends the SSAO technique you're implementing: [url="http://www.john-chapman.net/content.php?id=8"]http://www.john-chap...ontent.php?id=8[/url] [/quote]
  7. I am trying to integrate the SSAO technique by Vladimir Kajalin described in ShaderX7 into my game engine. My implementation seems to be working, but I noticed some weird projective artifacts that are always visible no matter what kind of geometry is being rendered. Here I increased the contrast to make them more visible. Can somebody look at my code and tell me what I'm doing wrong? Code:

    [source lang="cpp"]
    // Eight cube-corner directions, rotated per pixel to build the sample kernel.
    const float3 xyz[8] =
    {
        float3(-0.5774, -0.5774, -0.5774), float3(-0.5774, -0.5774, 0.5774),
        float3(-0.5774,  0.5774, -0.5774), float3(-0.5774,  0.5774, 0.5774),
        float3( 0.5774, -0.5774, -0.5774), float3( 0.5774, -0.5774, 0.5774),
        float3( 0.5774,  0.5774, -0.5774), float3( 0.5774,  0.5774, 0.5774),
    };

    Texture2D<float3> gNoiseVecMap;

    struct VS_IN  { float3 pos : POSITION;    };
    struct VS_OUT { float4 pos : SV_POSITION; };

    float GetDepth(const float2 uv)
    {
        const float z = LoadTexture(gDepthMap, uv, 0);
        return z;
    }

    VS_OUT VS(VS_IN vIn)
    {
        VS_OUT vOut;
        vOut.pos = float4(vIn.pos, 1.0f);
        return vOut;
    }

    float PS(VS_OUT pIn) : SV_Target
    {
        // Fetch a per-pixel random vector; the 4x4 noise texture tiles over the screen.
        float2 rotationTC = pIn.pos.xy/4;
        float3 vRotation = gNoiseVecMap.Sample(gPointSam, rotationTC);

        // Rotation matrix taking the z axis onto vRotation.
        float h = 1/(1+vRotation.z);
        float3x3 rotMat = float3x3(
            h*vRotation.y*vRotation.y + vRotation.z, -h*vRotation.y*vRotation.x,              vRotation.x,
            -h*vRotation.y*vRotation.x,              h*vRotation.x*vRotation.x + vRotation.z, vRotation.y,
            -vRotation.x,                            -vRotation.y,                            vRotation.z);

        float fSceneDepthP = GetDepth(pIn.pos.xy);

        const int nSamplesNum = 24;
        const float offsetScaleStep = 1 + 2.4/nSamplesNum;
        float offsetScale = 0.01;
        float Accessability = 0;

        for (int i = 0; i < nSamplesNum; ++i)
        {
            // Grow the kernel radius each iteration and rotate the offset per pixel.
            float3 vOffset = xyz[i%8]*(offsetScale *= offsetScaleStep);
            float3 vRotatedOffset = mul(vOffset, rotMat);

            // Convert the offset to screen space; z stays proportional to the pixel depth.
            float3 vRandVector = float3(vRotatedOffset.xy*gProjParams.screenDimension, vRotatedOffset.z*fSceneDepthP*2);
            float3 vSamplePos = float3(pIn.pos.xy, fSceneDepthP) + vRandVector;
            vSamplePos.xy = clamp(vSamplePos.xy, float2(0, 0), gProjParams.screenDimension-1);

            float fSceneDepthS = GetDepth(vSamplePos.xy);

            // Depth-range check: fall back to 0.5 for samples too far behind the pixel.
            float fRangeIsInvalid = saturate((fSceneDepthP - fSceneDepthS)/fSceneDepthS);
            Accessability += lerp(fSceneDepthS > vSamplePos.z, 0.5, fRangeIsInvalid);
        }

        Accessability = Accessability/nSamplesNum;
        return saturate(Accessability*(Accessability + 1));
    }

    technique11 Standard
    {
        pass P0
        {
            SetVertexShader( CompileShader( vs_5_0, VS() ) );
            SetGeometryShader( NULL );
            SetPixelShader( CompileShader( ps_5_0, PS() ) );
        }
    }
    [/source]

    screenDimension is the size of the back buffer (e.g. 1920x1200 here), and gDepthMap is a linear depth buffer filled with the length of the view-space position of each pixel.
  8. That totally answered my question, thanks!
  9. MJP, thanks for your answer. I'm trying to bind the buffer again as a depth buffer in another pass. I assume it's not possible to bind a multisampled depth buffer together with single-sampled render targets at the same time. Is there any solution for that?
  10. I'm trying to implement a deferred rendering engine with Direct3D 11. Somewhere in my code I have to resolve a multisampled depth buffer so that I can bind it in the forward rendering pass, where it should be single-sampled. I use ResolveSubresource like this:

    [source lang="cpp"]
    pD3DDeviceContext->ResolveSubresource(destTexture, 0, srcTexture, 0, DXGI_FORMAT_D24_UNORM_S8_UINT);
    [/source]

    But it doesn't seem to do anything at all! Can you think of anything that I'm doing wrong? Thanks
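    For what it's worth, a quick sketch (my guess, untested) of how to check whether a format supports multisample resolve at all; depth formats like DXGI_FORMAT_D24_UNORM_S8_UINT typically don't, in which case a manual resolve pass is needed:

    [source lang="cpp"]
    // Assumes an existing ID3D11Device* pDevice (and <d3d11.h>).
    UINT support = 0;
    HRESULT hr = pDevice->CheckFormatSupport(DXGI_FORMAT_D24_UNORM_S8_UINT, &support);
    if (SUCCEEDED(hr) && (support & D3D11_FORMAT_SUPPORT_MULTISAMPLE_RESOLVE))
    {
        // ResolveSubresource is legal for this format.
    }
    else
    {
        // No direct resolve: the usual workaround is a fullscreen pass that reads
        // the MSAA depth through a Texture2DMS SRV (Load per sample) and writes
        // the chosen value to SV_Depth or to a single-sampled color target.
    }
    [/source]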
  11. simple raytracer...

    I couldn't see your raytracer screenshots; maybe you could post a working link. As for the render time, I haven't done any optimization on the code yet. Maybe I'll use BSP trees later. Also, MFC wastes a lot of performance.
  12. I just coded yet another raytracer; I originally got the idea from Jacco Bikker's articles on flipcode, which are very good for beginners. You can download the binary and source code here: http://www.gameprogrammer.org/downloads/raytracer.zip It supports Lua scripting; ambient, diffuse, and specular lighting; raytraced shadows; reflection and refraction; super-sampling; and linear fog.