


#4965656 DirectX10 + HLSL Constantbuffer problem

Posted by MJP on 02 August 2012 - 04:08 PM

Your problem is that constant buffer sizes need to be multiples of 16 bytes, which in your case means that you need to round up the size from 28 to 32 bytes. You can round it up with ((size + 15) / 16) * 16.
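A minimal sketch of that rounding (plain arithmetic, no D3D dependencies; the function name is made up):

```cpp
#include <cstddef>

// Round a constant buffer size up to the next multiple of 16 bytes.
// (size + 15) / 16 counts 16-byte chunks; multiplying back by 16 gives
// the padded byte size, e.g. 28 -> 32.
size_t AlignCBufferSize(size_t size)
{
    return ((size + 15) / 16) * 16;
}
```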

Either way, if you enable the debug layer as Radikalizm suggests (which you do by passing D3D10_CREATE_DEVICE_DEBUG when creating your device), the runtime will tell you what the problem is.

#4965609 SpriteBatch with SharpDX

Posted by MJP on 02 August 2012 - 12:17 PM

I haven't seen anything, but Shawn Hargreaves (former XNA developer) did make a version of SpriteBatch for native D3D11. I'd imagine it shouldn't be too hard to port that to SharpDX.

#4965260 PIX and shader model 5.0

Posted by MJP on 01 August 2012 - 11:56 AM

FYI PIX will debug vertex, geometry, and pixel shaders, but it won't debug hull, domain, or compute shaders. You can use vendor-specific tools to do that if you need to, although I'll warn you that Parallel Nsight requires a remote debugging target to debug shaders.

#4965257 Tips on abstracting rendering interfaces for multiple renderers?

Posted by MJP on 01 August 2012 - 11:46 AM

> I've made a platform-agnostic renderer using your method and abstract base classes; I found that it was a giant pain managing all the platform defines to make sure that the proper helper structures get included

We have our structures in one header file, with one other header file that includes the right header based on the platform. I can't imagine why you'd need more than that.

> and I found that it was really difficult to abstract around all of the strange features of each renderer using the compile-time solution.

How does compile-time polymorphism at all limit you in terms of your ability to abstract out higher-level features? You can do all of the same things you can do with abstract base classes (if not more), the only difference is you don't eat a virtual function call every time you need to do something. I mentioned dealing with the small, low-level building blocks of a renderer but you can also have different platform implementations of higher-level features.

> Why do you prefer compile time to abstract base classes, and

Like I already mentioned, I prefer not having virtual function calls and indirections all over the place.

> how do you handle platform scaling, like D3D11 feature levels or OGL levels?

I don't, because I don't care about them. I mainly deal with consoles, which obviously skews my preferences quite a bit.

#4964998 Deferred shading material ID

Posted by MJP on 31 July 2012 - 03:29 PM

Basically you can conceptualize the 3D texture as a bunch of 2D textures one right after the other, with the number of 2D textures being equal to the depth of the texture. So you'll fill up the first 2D slice, then the second, then the third, and so on. Then when you access the texture in your shader a texture coordinate Z component of 0 will correspond to the first slice, and 1.0 will correspond to the last slice.
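A small sketch of that layout math (hypothetical helper names; assumes point sampling with clamp addressing for the slice lookup):

```cpp
#include <cstddef>

// Treating a WxHxD volume as D stacked 2D slices: the linear index of
// texel (x, y, z) when filling the initial data row by row, slice by slice.
size_t TexelIndex(size_t x, size_t y, size_t z, size_t width, size_t height)
{
    return z * width * height + y * width + x;
}

// Which slice a normalized Z texture coordinate hits under point sampling
// with clamp addressing: w = 0 lands in slice 0, w = 1 in slice depth - 1.
size_t SliceFromW(float w, size_t depth)
{
    long s = (long)(w * (float)depth); // floor for non-negative w
    if (s < 0) s = 0;
    if (s >= (long)depth) s = (long)depth - 1;
    return (size_t)s;
}
```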

#4964575 Tips on abstracting rendering interfaces for multiple renderers?

Posted by MJP on 30 July 2012 - 12:32 PM

Ugh, abstract base classes. Not a fan.

For the most part I prefer low-level implementation functions and simple data structs, with the implementation of both being determined at compile time based on the platform I'm building for. So there might be a Texture.h with a function "CreateTexture", then a Texture_win.cpp that creates an ID3D11Texture2D, then a Texture_ps3.cpp that does the PS3 equivalent, and so on. Then if you want you can build high-level classes on top of those functions.
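A hypothetical sketch of that structure (the struct layout and stub body are illustrative, not a real engine's code): the header declares a plain struct and a free function, each platform supplies its own .cpp, and the build system compiles exactly one of them, so the linker resolves the call directly with no virtual dispatch.

```cpp
// --- Texture.h ---
struct Texture
{
    unsigned width;
    unsigned height;
    void* platformHandle; // e.g. an ID3D11Texture2D* on Windows
};

Texture CreateTexture(unsigned width, unsigned height);

// --- Texture_win.cpp (only compiled for the Windows target) ---
Texture CreateTexture(unsigned width, unsigned height)
{
    Texture tex;
    tex.width = width;
    tex.height = height;
    tex.platformHandle = nullptr; // real code would create the D3D11 texture here
    return tex;
}
```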

You can actually use the same approach for more than just graphics, if you want. For instance file IO, threads, and other system-level stuff.

#4964568 Camera and Shaders Constant Buffers untied

Posted by MJP on 30 July 2012 - 12:17 PM

When you bind a constant buffer to a shader stage, it is available to any shader running at that stage. So if you bind the constant buffer to the vertex shader stage, any vertex shader you set can use that constant buffer; it doesn't matter if you set a new vertex shader. If you also want to use that constant buffer in the pixel shader stage, you need to bind it separately to that stage. You have to bind it separately for each stage, because the vertex shader and the pixel shader might use different sets of constant buffers even within the same draw call. It is the same way for textures, samplers, etc.
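An illustrative model of why the stages are independent (this is not the D3D API; the real calls are ID3D11DeviceContext::VSSetConstantBuffers and PSSetConstantBuffers — each stage simply keeps its own slot table):

```cpp
#include <array>

// Toy model: each shader stage owns an independent table of constant
// buffer slots, identified here by plain ints (0 = nothing bound).
struct Stage
{
    std::array<int, 14> constantBufferSlots{};
};

struct Context
{
    Stage vs, ps; // separate bindings per stage

    // Binding to the VS table does not touch the PS table, and vice versa.
    void VSSetConstantBuffer(int slot, int bufferId) { vs.constantBufferSlots[slot] = bufferId; }
    void PSSetConstantBuffer(int slot, int bufferId) { ps.constantBufferSlots[slot] = bufferId; }
};
```

Binding buffer 7 to VS slot 0 leaves PS slot 0 empty; a second, separate bind to the PS stage is required, which mirrors the real API's behavior.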

#4963718 [DirectX 11] Sudden Saturday Shadow Sadness Syndrome

Posted by MJP on 27 July 2012 - 12:11 PM

"Gradient operations" refer to anything that computes partial derivatives in the pixel shader, and in this particular case it's referring to the "Sample" function. You can't compute derivatives inside of dynamic flow control, since they're undefined if one of the pixels in the quad doesn't take the same path. So you need to either...

A. Use a sampling function that doesn't compute gradients, such as SampleLevel or SampleCmpLevelZero


B. Flatten all branches and unroll all loops in which you need to compute gradients
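Both options can be sketched in HLSL (detailTex, linearSampler, uv, and useDetail are hypothetical names, not from the original thread):

```hlsl
Texture2D detailTex;
SamplerState linearSampler;

float4 Shade(float2 uv, bool useDetail)
{
    // Illegal inside a dynamic branch: Sample() needs screen-space gradients.
    //     if (useDetail) color = detailTex.Sample(linearSampler, uv);

    // Option A: a sampling function that takes no gradients.
    float4 color = detailTex.SampleLevel(linearSampler, uv, 0.0f);

    // Option B: [flatten] makes the compiler evaluate both sides of the
    // branch, so the gradients stay defined for the whole pixel quad.
    [flatten]
    if (useDetail)
    {
        color = detailTex.Sample(linearSampler, uv);
    }

    return color;
}
```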

#4963428 Does Shader Model 4 support matrices composed of 64-bit floating-point values

Posted by MJP on 26 July 2012 - 02:47 PM

SM4.0 has no double support. It's an (optional) feature that was added for SM5.0. You can check for it with ID3D11Device::CheckFeatureSupport. Wikipedia also has comparison tables for AMD and Nvidia GPUs that tell you which ones support double precision.

As for "double4x4" in particular, the HLSL docs say that "double" is a valid type for matrices.

#4963214 FXAA, why not use Depth

Posted by MJP on 26 July 2012 - 01:13 AM

Depth alone also isn't enough to detect discontinuities in normals or other material parameters at triangle edges.

#4963004 Function Calls in HLSL

Posted by MJP on 25 July 2012 - 11:55 AM

The compiler will always inline, and if optimizations are enabled it will aggressively optimize away parts of the function based on how it was called. I consider this a good thing, since it means the GPU never executes unnecessary ALU or flow control instructions. Besides...even if you did get the compiler to spit out a call instruction, there's no guarantee that the driver won't just flatten it when it JIT compiles to microcode. In fact I'm not sure that recent architectures even support function calls at the microcode level; I've never checked.

#4962674 MSAA Deferred Renderer

Posted by MJP on 24 July 2012 - 11:50 AM

Dropping in something like FXAA will take you an hour or two. Seriously, it's that easy. So I don't think the choice is between MSAA and FXAA; instead your decision should be whether it's worth your time to implement MSAA in addition to FXAA. AAA games have shipped with deferred rendering + MSAA...Battlefield 3 is the first one that comes to mind. A lot of AAA games have shipped on PC without MSAA support, particularly those that use DX9.

Looking at the Steam HW survey, most people don't go over 1920x1080 resolution. Which means your 2560x1600 is an abnormally high-end case to consider. MSAA at 1920x1080 should definitely be doable on a high-end GPU, even with a deferred renderer.

#4961552 DirectX 10 skinned instancing

Posted by MJP on 20 July 2012 - 07:48 PM

Are your matrices transposed when you fill the texture with them?

#4961392 Multiple direct x applications on the same computer

Posted by MJP on 20 July 2012 - 12:19 PM

If each process creates its own device and context, then their state won't be shared. In other words, setting a vertex buffer on a context in one app won't affect the context in another app.

#4961060 HDRToneMappingCS11 what operator?

Posted by MJP on 19 July 2012 - 02:54 PM

With a geometric mean, outliers have less of an effect on the result, which is pretty nice for auto-exposure. Otherwise, if you have some small but really bright spots (such as emissive light sources), they will pull the average luminance up and your exposure may end up being too low. It also means that if you have small light sources that turn on and off over time, the exposure will change more drastically to try to compensate, which can look very bad.
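A minimal sketch of the geometric mean of luminance (the function name and the delta guard are my own; the delta simply avoids log(0) on black pixels): it is the exp of the average log-luminance, so a single 100x outlier barely moves it, while it would dominate an arithmetic mean.

```cpp
#include <cmath>
#include <cstddef>

// Geometric mean of luminance: exp of the average log-luminance.
// delta guards against log(0) for pure black pixels.
double GeometricMeanLuminance(const double* lum, size_t count, double delta = 1e-4)
{
    double sumLog = 0.0;
    for (size_t i = 0; i < count; ++i)
        sumLog += std::log(delta + lum[i]);
    return std::exp(sumLog / (double)count);
}
```

For {0.2, 0.2, 0.2, 100.0} the arithmetic mean is about 25, while the geometric mean stays below 1, so the exposure is not dragged down by the one bright spot.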