

Member Since 29 Mar 2007

#4952128 D3DXComputeTangentFrameEx help to decipher documentation

Posted by MJP on 23 June 2012 - 04:46 PM

I dug up this code that I used to use to generate tangents for a mesh that already has normals and texture coordinates:
D3DXComputeTangentFrameEx(clonedMesh, D3DDECLUSAGE_TEXCOORD, 0,
   D3DDECLUSAGE_TANGENT, 0, D3DDECLUSAGE_BINORMAL, 0, D3DDECLUSAGE_NORMAL, 0,
   D3DXTANGENT_GENERATE_IN_PLACE, NULL, -1.01f, -0.01f, -1.01f, NULL, NULL);

#4952123 XNA 4.0 - shimmering shadow maps

Posted by MJP on 23 June 2012 - 04:35 PM

Yes, that direction would be the direction towards the directional light, basically the same vector you would use for N dot L.

I'm not sure why your shadows would disappear...I would try to break into the debugger when that happens and step through your code for creating the shadow matrices. Capturing in PIX/Nsight/PerfStudio might be helpful too.

#4951526 Stable Cascade Shadows

Posted by MJP on 21 June 2012 - 04:38 PM

The image is from ShaderX6, if you're interested. The basic idea is that for each cascade, you partition the viewing frustum using a depth range and then build the 8 corners around the slice of the frustum representing that particular partition. Normally you build a light-space AABB around those corners to get a tight fit, but with stable CSM you use a bounding sphere instead (which you can actually do in world space instead of light space, since it doesn't matter for a sphere). Then you also do a little math to round off the translation of each partition, so that they only move in texel-sized increments.
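A minimal C++ sketch of that texel-snapping step (the helper and its names are hypothetical; it assumes the sphere center has already been transformed into light space):

```cpp
#include <cassert>
#include <cmath>

// Snap a light-space translation to texel-sized increments so the cascade
// only moves in whole-texel steps as the camera moves (stable CSM).
// With a bounding sphere, the ortho projection spans 2 * sphereRadius,
// so one shadow-map texel covers (2 * sphereRadius) / shadowMapSize
// world units.
void SnapToTexel(float& x, float& y, float sphereRadius, float shadowMapSize)
{
    const float worldUnitsPerTexel = (2.0f * sphereRadius) / shadowMapSize;
    x = std::floor(x / worldUnitsPerTexel) * worldUnitsPerTexel;
    y = std::floor(y / worldUnitsPerTexel) * worldUnitsPerTexel;
}
```

Because the sphere's size is constant for a given cascade, the texel size is constant too, which is what makes the rounding stable from frame to frame.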

#4951081 Problem computing tangent on a vertex (with changing texture direction)

Posted by MJP on 20 June 2012 - 12:50 PM

Yeah, what our engine will do is generate tangents on the expanded vertices (3 unique vertices per triangle), then try to merge them when it's finished. In most cases they will merge fine and you'll end up with shared verts, but in a case like this one, where the tangents point away from each other, we'll end up leaving them unique, which essentially splits the vertex.
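A sketch of that merge test (the threshold and all names are made up; the real engine logic is more involved):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Decide whether two per-corner tangents (assumed normalized) agree well
// enough to be merged into one shared vertex. Tangents pointing away from
// each other (e.g. across mirrored texture coordinates) fail the test,
// so the vertex stays split.
bool CanMergeTangents(const Vec3& a, const Vec3& b, float cosThreshold = 0.5f)
{
    return Dot(a, b) >= cosThreshold;
}
```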

#4950323 DIRECT X

Posted by MJP on 18 June 2012 - 12:14 PM

Indeed, this forum is for questions related to programming and game development.

#4950322 ID3D11DeviceContext::OMSetRenderTargetsAndUnorderedAccessViews(...)

Posted by MJP on 18 June 2012 - 12:12 PM

As far as I know the effects framework won't touch render targets, but I could be wrong. I would just capture the frame in PIX, and look at the API calls + device state leading up to your draw call.

#4950175 Texture Sharing b/w OpenGL and DirectX

Posted by MJP on 18 June 2012 - 02:44 AM

Yeah the byte order is backwards in D3D9. They fixed it for D3D10.
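For illustration, the red/blue swap that converts between the two layouts (a sketch assuming 32-bit packed pixels; D3D9's D3DFMT_A8R8G8B8 stores bytes as B,G,R,A in memory while DXGI's R8G8B8A8 formats store R,G,B,A):

```cpp
#include <cassert>
#include <cstdint>

// Swap the red and blue channels of a 32-bit packed pixel, converting
// between a little-endian ARGB dword (D3DFMT_A8R8G8B8) and an ABGR dword
// (DXGI_FORMAT_R8G8B8A8_*). Alpha and green stay where they are.
uint32_t SwapRedBlue(uint32_t pixel)
{
    const uint32_t rb = ((pixel & 0x00FF0000u) >> 16) |
                        ((pixel & 0x000000FFu) << 16);
    return (pixel & 0xFF00FF00u) | rb;
}
```

The swap is its own inverse, so the same helper works in both directions.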

#4949966 HLSL semantic & Xna data type problem

Posted by MJP on 17 June 2012 - 01:25 AM

DX9 shaders don't support integer operations or data types, they only work with floating point. Consequently, all vertex element formats will either specify a floating point format or will convert from integer to floating point. You can use the uint type in your HLSL code, but when the shader is compiled it will only use floating point instructions.
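For example, the expansion the hardware performs for a normalized byte component (such as D3DDECLTYPE_UBYTE4N) can be modeled like this (hypothetical helper name):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Model of what the vertex fetch does for a normalized unsigned-byte
// element: each integer component is expanded to a float in [0, 1]
// before the vertex shader ever sees it, since D3D9-era shaders operate
// on floats only.
float ExpandUByteN(uint8_t v)
{
    return static_cast<float>(v) / 255.0f;
}
```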

#4949866 The 10000 box challenge

Posted by MJP on 16 June 2012 - 01:18 PM

I hit 100k no problem with my i7 2600K and AMD 6950, even with rotations and normal mapping turned on. You puny mortals with your laptops can bow before the might of my desktop!

#4948668 Buffer creation

Posted by MJP on 12 June 2012 - 05:12 PM

What about the rest of the members of D3D11_BUFFER_DESC? Do you set them somewhere else in your code? If not they will not be initialized, and they will have garbage values. Also if you are creating a static vertex buffer that the GPU will not write to, then you should use D3D11_USAGE_IMMUTABLE instead of D3D11_USAGE_DEFAULT.

When you create a buffer, memory is allocated somewhere either in GPU memory or in system memory. If you provide initialization data, then that data is copied to that newly-allocated memory. So you don't need to hang onto your own copy of the data.

#4948586 Taking advantage of bottlenecks, kind of

Posted by MJP on 12 June 2012 - 12:17 PM

As far as shaders go, your primary hardware resources are shared among all shader stages (at least on DX10+ hardware), so you're not really going to have any hardware that's idling when performing a full-screen effect. However, executing a single instance of a geometry shader probably isn't going to make the slightest bit of difference, so if that's convenient for you then go for it. Alternatively you can also use a vertex shader that sets the appropriate vertex position based on SV_VertexID, and that would also let you avoid binding a VB + IB + InputLayout without having to use a geometry shader.
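The SV_VertexID math can be sanity-checked on the CPU; this C++ sketch mirrors the usual full-screen-triangle vertex shader (the actual shader would of course be HLSL, and the helper name is made up):

```cpp
#include <cassert>

struct Float2 { float x, y; };

// Mirrors the common full-screen-triangle vertex shader:
//   uv  = float2((id << 1) & 2, id & 2);
//   pos = float4(uv * float2(2, -2) + float2(-1, 1), 0, 1);
// Vertex IDs 0, 1, 2 yield clip-space positions (-1,1), (3,1), (-1,-3):
// a single triangle that covers the entire [-1,1] viewport, so no vertex
// buffer, index buffer, or input layout is needed.
Float2 FullScreenTrianglePos(unsigned id)
{
    const float u = static_cast<float>((id << 1) & 2u);
    const float v = static_cast<float>(id & 2u);
    return { u * 2.0f - 1.0f, v * -2.0f + 1.0f };
}
```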

#4947485 preprocessor defines not allowed in compile shader

Posted by MJP on 08 June 2012 - 03:03 PM

No, you can always use preprocessor macros. The error is probably on the line before that, either a missing semicolon or something similar.

#4946955 Lambert and the division by PI

Posted by MJP on 06 June 2012 - 11:37 PM

The energy conservation thing is a little tricky to understand at first. This is because when you render you usually only deal with the amount of reflected energy that goes towards the viewer, but energy conservation is concerned with the amount of energy reflected in all directions. Formally you define it as exitant irradiance <= incident irradiance, or

Eo <= Ei
where Eo is defined like this:

Eo = ∫Ω f(l, v) Ei cos(θ) dω
where f is our BRDF. Conceptually you can imagine this as moving the camera everywhere around the hemisphere surrounding the normal, applying the BRDF, and summing up the amount that's reflected toward the camera. This is very different from saying "the amount of energy reflected in the view direction should be less than or equal to the incident lighting", which isn't required to be true for energy conservation. In fact with a physically-based specular term the specular reflection can be many times greater than the incident irradiance.

If we use Lambertian diffuse as our BRDF, we can derive the 1/pi factor required for energy conservation. For Lambertian our BRDF = DiffuseAlbedo, so if we assume albedo = 1 then our BRDF drops out completely. If we then assume that our only incident irradiance comes from a directional light with intensity = 1.0 that's exactly perpendicular to the surface, then Ei = 1.0 and that also drops. Which leaves us with this, once we convert from hemispherical integral to spherical double integral form:

∫(φ=0..2π) ∫(θ=0..π/2) cos(θ) sin(θ) dθ dφ
If we solve that integral we get a result of pi, which means that if we multiply our diffuse by 1/pi then we'll satisfy the energy conservation inequality.
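You can verify that result numerically; here's a quick midpoint-rule sketch (the function name is made up):

```cpp
#include <cassert>
#include <cmath>

// Numerically evaluate the spherical double-integral form:
// phi in [0, 2pi], theta in [0, pi/2] of cos(theta) * sin(theta).
// The exact answer is pi, which is where the 1/pi factor in
// Lambertian diffuse comes from.
double HemisphereCosineIntegral(int steps = 1000)
{
    const double pi = 3.14159265358979323846;
    const double dTheta = (pi / 2.0) / steps;
    const double dPhi = (2.0 * pi) / steps;
    double sum = 0.0;
    for (int i = 0; i < steps; ++i)
    {
        const double theta = (i + 0.5) * dTheta; // midpoint rule
        for (int j = 0; j < steps; ++j)
            sum += std::cos(theta) * std::sin(theta) * dTheta * dPhi;
    }
    return sum;
}
```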

#4946949 Structured buffer float compression

Posted by MJP on 06 June 2012 - 10:54 PM

Well... no. I have not.
Do I have to consider this with structured buffers?

No, you don't. It's totally legal to access structures with a stride that's not a multiple of 16 bytes.

#4945903 Trying to understand the BRDF Equation

Posted by MJP on 03 June 2012 - 03:00 PM

It's not really a circular definition...it's just rearranging an equation. If you say that y = z * x then you can also say that z = y / x. In practical terms for graphics you're never going to "solve" for the BRDF, you're going to start with one based on some material model you're using for the surface and use that to solve for outgoing radiance.

Here's a really simple Blinn-Phong example for a directional light:
// Compute incident irradiance
float3 Ei = LightColor * saturate(dot(Normal, LightDirection));

// Compute the BRDF for the given normal + light direction + view direction
float3 HalfVector = normalize(LightDirection + ViewDirection);
float3 BRDF = pow(saturate(dot(Normal, HalfVector)), SpecularExponent);

// Calculate outgoing radiance by applying the BRDF to incident irradiance
float3 Lo = BRDF * Ei;