// 1.
pos.xy = in.vertexPos.xy; // in vertex shader
...
float depth = tex2D( sceneDepthMap, texcoords.xy ).r;
float4 pPos3D = mul( float4(pos.x, pos.y, depth, 1.0), InvProj );
pPos3D.xyz = pPos3D.xyz / pPos3D.www;
pPos3D.w = 1.0f;

Maybe I'm using the wrong matrix for "InvProj". If I understand it correctly, it's the inverse projection matrix (I'm using OpenGL). I tried other matrices as well, though (the inverse modelview-projection matrix). The other way:

// 2.
pos.xy = in.vertexPos.xy; // in vertex shader
viewVector = pos.xyz - camPos.xyz;
...
viewVector = normalize(viewVector);
float depth = tex2D( sceneDepthMap, texcoords.xy ).r;
float3 pPos3D = camPos.xyz + viewVector.xyz * depth;

I suppose 'viewVector' is not correct here... Both ways give wrong results. If I compare the result with the real 3D position rendered as a color, it's just totally different. My lacking knowledge about matrices and "spaces" is probably causing the problem... Anyone have an idea what goes wrong?

Greetings,
Rick
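For what it's worth, the inverse-projection approach (method 1) can be checked on the CPU. A minimal numpy sketch of the round trip, assuming an OpenGL-style projection matrix with column vectors (note the document's shaders use HLSL's row-vector `mul` convention, so the matrix would be transposed there) and a default `glDepthRange` of [0,1]; the key step is remapping the stored depth from [0,1] back to NDC [-1,1] before applying the inverse projection:

```python
import numpy as np

def perspective(fovy_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (column-vector convention)."""
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2.0 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)

# A known view-space point: this is what we want to reconstruct.
p_view = np.array([1.0, 2.0, -5.0, 1.0])

# Forward pass: project, then perspective-divide -> NDC in [-1,1].
clip = proj @ p_view
ndc = clip[:3] / clip[3]

# What the depth texture stores (OpenGL, default glDepthRange): ndc.z in [0,1].
stored_depth = ndc[2] * 0.5 + 0.5

# Reconstruction: remap stored depth BACK to [-1,1] before un-projecting.
ndc_z = stored_depth * 2.0 - 1.0
clip_guess = np.array([ndc[0], ndc[1], ndc_z, 1.0])
view_guess = np.linalg.inv(proj) @ clip_guess
view_guess = view_guess / view_guess[3]      # divide by w, as in the shader

assert np.allclose(view_guess[:3], p_view[:3])
```

If the shader feeds the raw [0,1] texture value into the inverse projection without that remap, the reconstructed position comes out wrong, which matches the symptom described.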
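The ray approach (method 2) only works when the stored depth and the ray scaling agree. A small numpy sketch of the two consistent pairings, using made-up positions purely for illustration: either store the *distance* along a normalized ray, or store linear view-space z and scale the ray so its z-depth is 1. Mixing a normalized ray with the non-linear hardware depth value gives neither.

```python
import numpy as np

cam_pos = np.array([0.0, 1.0, 3.0])
p_world = np.array([2.0, 0.0, -4.0])   # point we want to recover

ray = p_world - cam_pos

# Pairing A: normalized ray + stored *distance* along the ray.
dist = np.linalg.norm(ray)
ray_n = ray / dist
recon_a = cam_pos + ray_n * dist
assert np.allclose(recon_a, p_world)

# Pairing B: ray scaled to z-depth 1 + stored linear view-space depth.
# Here the camera is assumed to look down -z in world space, so the
# linear depth is -(delta z); both choices are illustrative assumptions.
view_z = -(p_world[2] - cam_pos[2])    # linear depth, positive
ray_unit_z = ray / view_z              # ray rescaled so its z-depth is 1
recon_b = cam_pos + ray_unit_z * view_z
assert np.allclose(recon_b, p_world)
```

In the snippet above, `normalize(viewVector)` combined with the raw depth-buffer value is neither pairing, which would explain the totally different colors when comparing against the true position.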