ArthurD

Members
  • Content count: 8

Community Reputation

100 Neutral

About ArthurD

  • Rank: Newbie
  1. Soft-Edged Shadows

    This article is very well written; the explanations are great, and I feel like I understood the idea. Thanks a lot! Now I'll have to try to implement it myself; we'll see how it goes.
  2. Thanks a lot! This explains why I would get nothing if I didn't wrap my sampler. With this issue solved, I can continue with the implementation.
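
    For reference, in case someone else hits the same wall: the wrap mode is chosen when the sampler state is created. My demo goes through the Humus framework, so this is only a minimal sketch of the plain D3D11 equivalent, not my exact code:

    [code]
    // Minimal sketch: a point sampler with wrap addressing for the G-buffer reads.
    D3D11_SAMPLER_DESC sd = {};
    sd.Filter   = D3D11_FILTER_MIN_MAG_MIP_POINT;
    sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP; // wrap addressing, as suggested above;
    sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP; // with the wrong address mode my samples returned nothing
    sd.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
    sd.MaxLOD = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&sd, &sampler); // 'device' stands in for the usual ID3D11Device*
    [/code]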
  3. Hello everyone, I have been trying to implement the simple SSAO described here: [url="http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-simple-and-practical-approach-to-ssao-r2753"]http://www.gamedev.n...h-to-ssao-r2753[/url] inside a DirectX demo (based on the Humus framework), but so far I have not been able to make it work.

    Edit: I forgot to state that this demo uses an inverted z range (far plane is z = 0, near plane is z = 1). I don't know whether that is relevant to the problem I am encountering, though.

    Here are the steps I take to prepare the textures used for the SSAO. During my deferred rendering pass, I compute a view-space position, VPosition, in the vertex shader:

    [CODE]
    Out.Position  = mul(ViewProj, In.Position);
    Out.VPosition = mul(View, In.Position);
    [/CODE]

    and in the pixel shader I derive a normal from it:

    [CODE]
    Out.VNormal  = normalize(cross(ddx(In.VPosition.xyz), ddy(In.VPosition.xyz)));
    Out.Position = In.VPosition;
    [/CODE]

    Position and VNormal are two render targets using the format DXGI_FORMAT_R11G11B10_FLOAT. In the SSAO pass, those two render targets are fed as input textures to the shader.

    To understand what is going wrong, I have kept only the smallest step of the algorithm: I compute the diff vector between the point located at (x+1, y) and the current point, and look at the dot product between this diff and the normal. Since two points on a plane define a vector lying in that plane, and the plane's normal is perpendicular to it, I would expect this dot product to be 0 for every point on a flat surface, except at the edges.

    [CODE]
    Texture2D<float4> Base;
    Texture2D<float3> Positions;
    Texture2D<float3> VNormals;
    SamplerState Filter;

    float4 main(PsIn In) : SV_Target
    {
        float4 base = Base.Sample(Filter, In.texCoord);
        const float2 vec[4] = { float2(1,0), float2(-1,0), float2(0,1), float2(0,-1) };

        float3 p = Positions.Sample(Filter, In.texCoord);
        float3 n = VNormals.Sample(Filter, In.texCoord);
        n = normalize(n);

        float2 coord1 = vec[0];
        float3 diff = Positions.Sample(Filter, In.texCoord + coord1) - p;
        const float3 v = normalize(diff);

        base.rgb = float3(dot(n, v), dot(n, v), dot(n, v));
        return base;
    }
    [/CODE]

    However, as you can see in this screenshot, the dot product is not at all what I expected it to be, so in turn it is not surprising that the SSAO doesn't work at all. Any help would be appreciated!

    [img]http://img3.imageshack.us/img3/83/ssaodotnv.jpg[/img]

    If it can be helpful, here is the VNormals texture.

    [img]http://img268.imageshack.us/img268/1262/ssaovnormal.jpg[/img]
  4. [DX11] Depth Bug

    Thanks unbird! I hadn't realized that the cull mode was the cause of this bug; fixing it solved everything. Thanks a lot! And thank you MJP, I had not thought of the problems my far plane could create; I changed it to a more reasonable value.
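
    For anyone finding this thread later, the fix boils down to the rasterizer state. This is only a minimal sketch of the kind of setup involved; which combination of CullMode and FrontCounterClockwise is correct depends on the winding order of your meshes:

    [code]
    // Minimal sketch; 'device' and 'context' are the usual D3D11 device and immediate context.
    D3D11_RASTERIZER_DESC rd = {};
    rd.FillMode = D3D11_FILL_SOLID;
    rd.CullMode = D3D11_CULL_BACK;    // culling the wrong side is what let the far side of the model show through
    rd.FrontCounterClockwise = FALSE; // must match the triangle winding of the loaded models
    rd.DepthClipEnable = TRUE;

    ID3D11RasterizerState* rs = nullptr;
    device->CreateRasterizerState(&rd, &rs);
    context->RSSetState(rs);
    [/code]

    On top of that, bringing SCREEN_DEPTH down from 100000.0f reduces how much depth-buffer precision is wasted across the near/far range.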
  5. Hello, I have a bug in the depth handling of my program that causes a very weird-looking glitch. I made a video of the problem here. Note that in the video the model is simply rotating clockwise; however, because the depth is "inverted", it seems to change its direction of rotation a few times.

    [media]http://www.youtube.com/watch?v=TI7l8olrk10[/media]

    There are two bugs, but I think they are linked to each other:
    - The part of the model that is displayed is not the closest one, but the one that is farthest away.
    - The model gets bigger as it moves away from the camera instead of becoming smaller.

    If I invert the DepthFunc, the first problem is solved and it is always the closest polygons that are displayed, but the second problem is still present, which makes me think that I didn't solve the bug, just worked around it.

    The problem is that I do not know exactly where to look, so I don't know which part of the code to show you. I thought I had gotten my projection matrix wrong, but I don't see anything wrong with it:

    [code]
    const float SCREEN_DEPTH = 100000.0f;
    const float SCREEN_NEAR  = 0.1f;

    fieldOfView  = (float)D3DX_PI / 4.0f;
    screenAspect = (float)screenWidth / (float)screenHeight;
    D3DXMatrixPerspectiveFovLH(&m_projectionMatrix, fieldOfView, screenAspect, SCREEN_NEAR, SCREEN_DEPTH);
    [/code]

    And here is the depth-stencil description:

    [code]
    depthStencilDesc.DepthEnable    = true;
    depthStencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    depthStencilDesc.DepthFunc      = D3D11_COMPARISON_LESS;
    [/code]

    Thanks for your time reading this; any help is welcome!
  6. Exactly, you have pinned down the origin of your problem yourself: a WORD is a 16-bit integer, so an unsigned index stored in one can't go over 65535. If you use a DWORD (a double word) instead, you have 32 bits to store each index.
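
    In D3D11 terms (assuming that is what you are using), the choice shows up as the index buffer format when you bind it. A minimal sketch; 'context' and 'indexBuffer' stand in for your own objects:

    [code]
    // 16-bit indices: one WORD each, so the largest addressable index is 65535.
    context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0);

    // 32-bit indices: one DWORD each, required once a mesh indexes past 65535.
    // The CPU-side index array must use a matching 32-bit type (e.g. unsigned int).
    context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
    [/code]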
  7. The missing float4 solved the deal. Thanks a lot! And thanks for the optimisations, I corrected my code with them!
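
    For anyone hitting the same symptom: without the float4 constructor, the parentheses are just HLSL's comma operator, so the whole expression evaluates to its last value, which then gets broadcast to all four components. A minimal before/after:

    [code]
    // Wrong: comma operator, equivalent to ambientC = 1.0f, i.e. (1, 1, 1, 1) after broadcast.
    ambientC = (0.1f, 0.1f, 0.1f, 1.0f);

    // Right: an actual float4 constructor.
    ambientC = float4(0.1f, 0.1f, 0.1f, 1.0f);
    [/code]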
  8. Hi everyone, I have been learning DirectX 11 on my own recently and decided to put it into practice by writing a program that loads Doom 3 models. The loading works, but I am having a problem with the shaders, which is why I have come here to ask. The basic diffuse part works well, as you can see in this code and in this picture. (The color values are hardcoded in the shader to be sure they are right, so I don't have to jump between files while I fix this. By the way, my model has no head right now; I have only loaded the first mesh until I fix this.)

    [code]
    float4 LightPixelShader(PixelInputType input) : SV_TARGET
    {
        // Locals; the texture, sampler and light-direction globals are declared elsewhere in the file.
        float4 diffC;
        float4 ambientC;
        float4 textureColor;
        float3 lightDir;
        float  lightIntensity;
        float4 color;

        diffC = (1.0f, 1.0f, 1.0f, 1.0f);
        ambientC = (0.1f, 0.1f, 0.1f, 1.0f);

        textureColor = shaderTexture.Sample(SampleType, input.tex);
        lightDir = -lightDirection;
        lightIntensity = saturate(dot(input.normal, lightDir));

        if (lightIntensity > 0.0f)
        {
            color = (diffC * lightIntensity);
        }
        else
        {
            color = (0.0f, 0.0f, 0.0f, 0.0f);
        }

        color = saturate(color);
        color = color * textureColor;
        return color;
    }
    [/code]

    [img]http://img14.imageshack.us/img14/741/diffuse.png[/img]

    Now if I try to add a simple constant ambient component on top of that, it does not add correctly but sets everything to plain 1.0 lighting instead. The only change from the shader above is this line, inserted just before the saturate:

    [code]
    color += ambientC;
    [/code]

    [img]http://img840.imageshack.us/img840/1451/diffuseplusambient.png[/img]

    But if I add the color components myself, replacing that line with the four below, I get the effect I want. I don't understand what is different about this compared to a simple addition?

    [code]
    color.x += 0.1f;
    color.y += 0.1f;
    color.z += 0.1f;
    color.a += 1.0f;
    [/code]

    [img]http://img844.imageshack.us/img844/9791/objective.png[/img]

    I am open to any help or pointers as to what I am doing wrong. Thanks for reading this.