

MJP


#4994530 Problem Setting Constant Buffers

Posted by MJP on 27 October 2012 - 03:29 PM

A bool will be 1 byte in VC++ and 4 bytes in HLSL. You should use uint32_t (or some equivalent type) to represent an HLSL bool in your C++ structure.
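For example, here's a minimal sketch of a matching pair of layouts (the member names are hypothetical):

// HLSL side:
// cbuffer Constants : register(b0)
// {
//     bool  UseNormalMap;   // occupies 4 bytes in the constant buffer
//     float Intensity;
// };

#include <cstdint>

// C++ mirror of the cbuffer: use uint32_t instead of bool so the sizes match
struct Constants
{
    uint32_t UseNormalMap;
    float    Intensity;
    float    Padding[2];   // constant buffer sizes must be a multiple of 16 bytes
};

static_assert(sizeof(Constants) == 16, "must match the HLSL cbuffer size");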


#4993492 Problem with ID3D11DeviceContext::Map()

Posted by MJP on 24 October 2012 - 12:15 PM

You want to use DISCARD in this case, and fill the entire buffer with new data. Doing it this way allows the driver to avoid synchronization between the CPU and GPU.
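Here's a rough sketch of the DISCARD pattern (the buffer is assumed to have been created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE):

#include <d3d11.h>
#include <cstring>

// Refill an entire dynamic buffer with DISCARD. The driver hands back a fresh
// region of memory, so the GPU can keep reading the old contents without any
// CPU/GPU synchronization.
void FillDynamicBuffer(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                       const void* data, size_t numBytes)
{
    D3D11_MAPPED_SUBRESOURCE mapped = { };
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, data, numBytes); // write the *whole* buffer
        context->Unmap(buffer, 0);
    }
}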


#4992243 Use of Bitmap with DirectX11

Posted by MJP on 20 October 2012 - 01:52 PM

I'm assuming that you're using either SlimDX or SharpDX? If you're using SlimDX, the easiest option is probably to save the Bitmap to a Stream and then pass that Stream to Texture2D.FromStream. This will perform any necessary format conversions for you (such as going from RGB to RGBA) and can also generate mipmaps. Otherwise you'll have to obtain the pointer to the raw bitmap data using LockBits, loop through the data and convert the format if necessary, then pass your final data to the Texture2D constructor through a DataRectangle. For going back to a bitmap, there's Texture2D.ToStream, which will save the texture as a bitmap file in memory. You can then pass that stream to the Bitmap constructor, and the Bitmap will be initialized from the texture data.

I'm not too familiar with SharpDX so I can't give specifics, but it should support similar functionality.


#4992236 HDR and translucent objects, and water reflection....

Posted by MJP on 20 October 2012 - 01:25 PM

If you want your transparent geometry to fit in with your HDR scene, you need to light it properly with the same overall lighting intensity that you use for your opaques. You can't just composite it in afterwards with LDR lighting. If you have emissive geometry that generates its own lighting, then you need to be able to set an HDR intensity value so that it will fit in with the scene.


#4991944 Using D3D11_QUERY_OCCLUSION

Posted by MJP on 19 October 2012 - 03:35 PM

The reason you can't find much information is because not many people are doing it. Plenty have tried, and failed. It's really hard to make occlusion queries work for a general visibility system, due to the latency and batching problems. The more "en vogue" technique at the moment is software depth buffer rasterization + occlusion testing.
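For reference, the basic mechanics look like this (a bare-bones native D3D11 sketch with hypothetical names, which glosses over all of the batching and latency management that makes this hard in practice):

#include <d3d11.h>

// Issue an occlusion query around a draw call (typically a bounding volume)
void IssueOcclusionQuery(ID3D11Device* device, ID3D11DeviceContext* context,
                         ID3D11Query** query)
{
    D3D11_QUERY_DESC desc = { };
    desc.Query = D3D11_QUERY_OCCLUSION;
    device->CreateQuery(&desc, query);

    context->Begin(*query);
    // ... draw the object's bounding volume here ...
    context->End(*query);
}

// Poll the result on a later frame; returns true once the data is available.
// This delay is the latency problem: any visibility decision you make from
// the result is always at least a frame out of date.
bool GetOcclusionResult(ID3D11DeviceContext* context, ID3D11Query* query,
                        bool& visible)
{
    UINT64 samplesPassed = 0;
    // GetData returns S_FALSE while the GPU hasn't finished the query
    if (context->GetData(query, &samplesPassed, sizeof(samplesPassed), 0) != S_OK)
        return false;

    visible = (samplesPassed > 0);
    return true;
}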


#4991566 Low resolution shadow maps

Posted by MJP on 18 October 2012 - 03:51 PM

I've done some experimentation, mostly in an attempt to fake shadows from large area light sources. It mostly works, but I had a lot of problems avoiding biasing artifacts. If you use a large PCF kernel (I was using 7x7) the size of that kernel in world space can be huge, which throws off most biasing techniques. (E)VSM is probably a more promising approach, but I think you'd still run into issues.


#4990595 newbie Alpha blend / transparency question.

Posted by MJP on 15 October 2012 - 08:51 PM

With standard alpha blending, you need to draw primitives in back-to-front order to get the right result. It's a major limitation of rendering transparent geometry this way, and it's a huge pain in the ass to deal with in a production scenario. Most games just do some sort of coarse per-mesh or per-object sort by depth and hope for the best.
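The coarse sort itself is simple enough; something like this (a sketch with hypothetical types):

#include <algorithm>
#include <vector>

struct TransparentDraw
{
    float viewSpaceDepth; // distance from the camera, computed per object
    // ... mesh, material, transform, etc.
};

// Sort farthest-first so that each object blends over everything behind it
void SortBackToFront(std::vector<TransparentDraw>& draws)
{
    std::sort(draws.begin(), draws.end(),
              [](const TransparentDraw& a, const TransparentDraw& b)
              { return a.viewSpaceDepth > b.viewSpaceDepth; });
}

The per-object depth is only an approximation, which is why this can still produce the wrong result for intersecting or mutually overlapping geometry.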

There are specialized rendering techniques that give you order-independent transparency, but they can be somewhat complex and have a performance cost associated with them.


#4990506 Direct3D11 crashing display driver

Posted by MJP on 15 October 2012 - 02:36 PM

Probably the most typical case is when the GPU spends so long executing a command that Windows' Timeout Detection and Recovery (TDR) kicks in and resets the display driver; by default the timeout is around two seconds.


#4990451 Lots of reflections- how?

Posted by MJP on 15 October 2012 - 11:50 AM

Yeah the common approach is to place reflection probes in the scene, and then pre-render environment maps at the probe locations. When you do this you can also filter the mip levels such that they somewhat match the specular response for different roughness values (specular powers).


#4990246 If statement weird behavior

Posted by MJP on 14 October 2012 - 09:53 PM

The compiler is almost certainly flattening that branch, or moving the texture sample outside of the branch. This is because you can't use Texture2D.Sample inside a branch, since that function requires the gradients of the texture coordinate in order to perform mip level selection and/or anisotropic filtering. Gradient operations are undefined inside of a branch, since the neighboring pixels in the 2x2 quad might not take the same path through code. So if you want to branch around texture samples, you need to do something like this:


// Compute the texture coordinate gradients outside the branch, where
// they're well-defined for the entire 2x2 pixel quad
float2 tcGradX = ddx(pin.Tex);
float2 tcGradY = ddy(pin.Tex);

// Force the compiler to issue a branch instruction
[branch]
if(gUseNormalMap)
{
  // SampleGrad takes explicit gradients, so it's legal inside the branch
  float3 normalMapSample = gNormalMap.SampleGrad(samLinear, pin.Tex, tcGradX, tcGradY).rgb;
  pin.NormalW = NormalSampleToWorldSpace(normalMapSample, pin.NormalW, pin.TangentW);
}
else
{
  return float4(0, 1, 0, 1);
}



#4990099 XNAMath vs D3DX10Math && *.fx files vs *.psh and *.vsh

Posted by MJP on 14 October 2012 - 12:10 PM

Yeah, DirectXMath is just the new name for XNAMath; you can think of it as the latest version of XNAMath. Most of it is exactly the same as XNAMath, so the things I said about DirectXMath apply to XNAMath as well (including what I said about high-performance math code).

If you look at the samples, you'll notice that MS stopped using effects for the D3D11 samples.


#4989888 Best way to filter for a bloom effect

Posted by MJP on 13 October 2012 - 03:03 PM

The "best way" in my opinion is not to use a threshold at all. A step function is ugly and will cause aliasing. A more natural approach is to just use a lower exposure for your bloom pass, which will naturally subdue to darker areas while allowing brighter areas to remain visible in the end result.


#4989884 XNAMath vs D3DX10Math && *.fx files vs *.psh and *.vsh

Posted by MJP on 13 October 2012 - 02:55 PM

There are a few things to consider:

1. D3DX is essentially deprecated at this point. The library is totally functional and there's nothing "wrong" with it per se, but it is no longer being developed. Any future updates will be for DirectXMath (DirectXMath is the new name for XNAMath), and not for D3DX.

2. DirectXMath is designed to map well to modern SIMD instruction sets (SSE2-4 and ARM NEON), and allows for high-performance math code if used right. D3DX math can use older SSE instructions, but it does so in a sub-optimal way. One result of this is that DirectXMath has a steeper learning curve, and generally requires you to write more code since you have to explicitly load and store SIMD values (there's a short sketch of this load/store pattern after this list). However it's possible to write a wrapper that simplifies the usage, if you don't care very much about performance.

3. DirectXMath can be used in Windows 8 Metro apps, and like I mentioned earlier supports ARM instructions. So if you ever want to release on Windows Store, you can't use D3DX at all.

With all of that said, my recommendation would be to go with DirectXMath.
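To give you an idea of what I mean by explicit loads and stores, here's a small sketch (the variable names are made up):

#include <DirectXMath.h>
using namespace DirectX;

// XMFLOAT4X4 is a plain storage type that's safe to keep in your own structs;
// XMMATRIX lives in SIMD registers and has to be explicitly loaded and stored.
XMFLOAT4X4 worldStorage;
XMFLOAT4X4 viewProjStorage;
XMFLOAT4X4 worldViewProjStorage;

void ComputeWorldViewProjection()
{
    XMMATRIX world    = XMLoadFloat4x4(&worldStorage);      // load into SIMD registers
    XMMATRIX viewProj = XMLoadFloat4x4(&viewProjStorage);
    XMMATRIX wvp      = XMMatrixMultiply(world, viewProj);  // SIMD multiply
    XMStoreFloat4x4(&worldViewProjStorage, wvp);            // store back to memory
}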

Now when you talk about .fx files, what you're really talking about is whether or not you should use the Effects Framework. The Effects Framework has a bit of a weird history. It started out as part of D3DX, and was essentially a support library that helped you manage shaders and setting constants, textures and device state. Then in D3D10 it became part of the core API, and was moved out of D3DX. Then for D3D11 they moved it out of both, stripped out some functionality, and provided it as source code that you can compile if you want to use it. Once again there are a few considerations:

1. Like I said before the Effects Framework is a helper library that sits on top of D3D. It helps you manage shaders and states, but it doesn't do anything that you couldn't do yourself with plain shaders and the core APIs.

2. In D3D9 the Effects Framework provided a pretty good model for mapping to the shader constant setup used by SM2.0/SM3.0 shaders, as well as the render state API. For D3D10 and D3D11 it is no longer such a good fit for constant buffers and immutable state objects, at least in my opinion. Like I mentioned earlier certain functionality was stripped out for the Effects11 version, which also makes it less useful than it used to be.

3. Like D3DX you can't use it for Metro applications. This is because it uses the D3DCompiler DLL to compile shaders and obtain reflection data, and this functionality isn't available to Metro apps.

Personally, I wouldn't recommend using Effects11. It's not really very convenient anymore, and I feel like you're better off just getting familiar with how shaders, states, constant buffers, and resources work in the core API.


#4989605 ShaderReflection; stripping information from an Effect

Posted by MJP on 12 October 2012 - 05:15 PM

When you declare a constant buffer in a shader, the shader doesn't really care about the actual D3D resources that you use to provide the data. So for instance if you have a shader with this constant buffer layout:

cbuffer Constants : register(b0)
{
    float4x4 World;
    float4x4 ViewProjection;
}

When you compile a shader with this code, there's no allocation of resources for that constant buffer or anything like that. All that code says is "when this shader runs, I expect a constant buffer of 128 bytes (32 floats * 4 bytes) to be bound to slot 0 of the appropriate shader stage". It's then your application code's responsibility to actually create a constant buffer using the Buffer class with the appropriate size and binding flags, fill that buffer with the data needed by the shader, and then bind that buffer to the appropriate slot using DeviceContext.<ShaderType>.SetConstantBuffer. If you do that correctly, your shader will pull the data from your Buffer and use it.
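In native C++ those steps look something like this (the SlimDX/SharpDX Buffer and DeviceContext types wrap these same calls; the names here are hypothetical):

#include <d3d11.h>
#include <cstring>

// Create a 128-byte constant buffer, fill it with the two matrices, and bind
// it to slot b0 of the vertex shader stage.
ID3D11Buffer* CreateAndBindConstants(ID3D11Device* device,
                                     ID3D11DeviceContext* context,
                                     const void* matrixData)
{
    D3D11_BUFFER_DESC desc = { };
    desc.ByteWidth      = 128; // two float4x4 matrices; must be a multiple of 16
    desc.Usage          = D3D11_USAGE_DYNAMIC;
    desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* buffer = nullptr;
    if (FAILED(device->CreateBuffer(&desc, nullptr, &buffer)))
        return nullptr;

    D3D11_MAPPED_SUBRESOURCE mapped = { };
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, matrixData, 128);
        context->Unmap(buffer, 0);
    }

    context->VSSetConstantBuffers(0, 1, &buffer); // slot 0 == register(b0)
    return buffer;
}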

Now let's say you have two vertex shaders that you compile, and both use the same constant buffer layout. In this case there is no "duplication" or anything like that, since it's your responsibility to allocate and manage constant buffer resources. So if you wanted to, it's possible to share the same Buffer between draw calls using your two different shaders. You could bind the buffer, draw with shader A, and then draw with shader B, and both shaders will pull the same data from the buffer. Or if you wanted, you could set new data into the buffer after drawing with shader A, and then shader B will use the new contents of the buffer. Or if you wanted, you could create two buffers of the same size, bind one buffer for shader A, and bind the other for shader B.

An interesting consequence of this setup is that you don't necessarily need the exact same constant buffer layout in two shaders in order to share a constant buffer. For instance shader B could just have this:

cbuffer Constants : register(b0)
{
    float4x4 World;
}

In that case it would be okay to still use the same constant buffer as shader A, since the size of the buffer expected by shader B is still less than or equal to the size of the constant buffer that was bound. But it's up to you to make sure that in all cases the right data gets to the right shader. In practice I wouldn't really recommend doing something like I just mentioned, since it can easily lead to bugs if you update a constant buffer layout in one shader but forget to do it in another. Instead I would recommend defining shared layouts in a header file, and then using #include to share it between different shaders.


#4988817 Unreal 4 voxels

Posted by MJP on 10 October 2012 - 01:25 PM

The indirect light does indeed get shadowed due to the cone tracing, although not perfectly, due to the approximations introduced by the voxelization and the tracing itself. At the SIGGRAPH presentation they mentioned that they were still using SSAO to add some small-scale AO from features that weren't adequately captured by the voxelization, but I think that's a judgement call that you'd have to make for yourself.



