
Advice for workflow and organization using Effects11


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

4 replies to this topic

#1 NotTakenSN   Members   -  Reputation: 149

Posted 17 September 2012 - 11:11 PM

I'm currently using the Effects11 framework, and I find it very convenient to organize and compile my shaders, set the resources, make the draw calls, etc. But now I'm running into a problem with my compile times. Inside my effects file, I have a vertex, geometry, pixel, and two compute shaders, all of which are necessary for a technique I'm designing. One of the compute shaders is very long (about 900 instructions without any unrolling) and takes over 3 minutes to compile. I have finished working on that compute shader and do not need to make any more changes to it, but when I make changes to the other shaders in the effects file, I have to recompile the entire effects file, which includes waiting 3 minutes for the big compute shader to recompile. This is quite inconvenient when I'm trying to debug the shaders.

Is there a way to exclude a specific shader from recompilation? Or do I need to create a new effects file? What is your preferred workflow when using the Effects11 framework, or do you not use it at all? And do you lump all of your shaders into one big effects file, or separate them into smaller ones? I appreciate your replies.


#2 MJP   Moderators   -  Reputation: 10550


Posted 17 September 2012 - 11:31 PM

Are you using the June 2010 SDK? If so, you might want to try the new version of the shader compiler from the Windows 8 SDK, which I've found to be much faster for some compute shaders that used to take minutes to compile.

Anyway, there's no way I know of to compile only certain shaders in an effect file; as far as I know you have to compile the entire thing. I don't use the effects framework personally, and I don't see too many other people using it. It used to be nice in D3D9, but for D3D11 it's clunky and no longer maps very well to the API (especially when it comes to constant buffers). At work we have our own in-house system for managing shaders and materials, so we don't need effects. At home I find it easier to just work with shaders and constant buffers directly.

#3 NotTakenSN   Members   -  Reputation: 149


Posted 18 September 2012 - 12:12 AM

Thanks for your great reply, as always, MJP. I am using the June 2010 SDK, so I'll definitely take a look at the Windows 8 SDK. I suppose it's time for me to abandon the Effects11 framework, since Microsoft doesn't really support it anymore. I just thought it might be common practice, since Frank Luna's book Introduction to 3D Game Programming with DirectX 11 used it. Would you happen to have a good source on working with shaders and buffers directly (or through a self-developed system), as well as on compiling shaders offline properly? I've stumbled across Microsoft documentation about aligning resources to the correct slots across multiple shader files. The Microsoft documentation can be frustratingly sparse, so I would definitely prefer a good book or website.

#4 NotTakenSN   Members   -  Reputation: 149


Posted 18 September 2012 - 05:12 PM

I installed the Windows 8 SDK and tried the fxc compiler included in the kit, but now my shader won't compile. Apparently it doesn't like my use of group syncs inside loops that depend on UAV conditions (even though I've specified the [allow_uav_condition] attribute). The weird thing is that the compiler in the June 2010 SDK doesn't have any problems, and my shader runs exactly how I want it to. Should I stick with the older compiler, or should I be concerned that the new compiler doesn't like my code? Is the new compiler stricter about thread syncs? In my shader, all the threads in a group read from the same UAV address, which determines the flow in the loop, so all the warps in the group should follow the same path... I don't know why it generates an error in the new compiler.

Another possibility is that I'm not setting up the project correctly to use the new compiler. I don't want to switch entirely to the Windows 8 SDK (I'm using some D3DX functionality), so the only thing I changed was the executable directory in the project properties to the Windows 8 SDK bin directory. Does the compiler need the new libraries and headers, or can it just use the ones in the June 2010 SDK?

Edited by NotTakenSN, 18 September 2012 - 05:32 PM.


#5 MJP   Moderators   -  Reputation: 10550


Posted 18 September 2012 - 11:06 PM

You can use the compiler separately; you don't need the new headers or libs. I'm not sure why it doesn't like your code, though; I haven't seen similar behavior myself.
Compiling shaders offline isn't terribly complicated; it's pretty much the same as compiling an effect. You can use either fxc, the D3DX functions, or the D3DCompile functions exported by D3DCompiler_xx.dll. In all cases you pass the shader code (or the file containing it), the entry point of your shader, some flags, and the target profile. Most of that is the same information you specify when you declare a technique in an effects file.
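For example (the file names and entry points here are placeholders), offline compiles with fxc might look like this:

```
:: /T = target profile, /E = entry point, /Fo = compiled object file output
fxc /T cs_5_0 /E CSMain /Fo BigCompute.cso BigCompute.hlsl
fxc /T ps_5_0 /E PSMain /Fo Pixel.cso Pixel.hlsl
```

Because each shader is compiled on its own, a slow compute shader only has to be recompiled when its own source changes.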

As far as resources and shaders go, everything is done with a "slot" system. If you look through the "resources" section of this free book there's an explanation, but I'll give you a brief one as well. For each resource type that a shader can use there is a set of "slots", referred to by registers in the shader code. There are "t" registers for shader resource views, "b" registers for constant buffers, "u" registers for unordered access views, and "s" registers for sampler states.

When you declare a resource in your shader code, the compiler automatically assigns it a register of the appropriate type. So for instance, say you have a pixel shader that declares a Texture2D. The compiler will assign it to register "t0", which corresponds to the 0th shader resource view slot for the pixel shader stage. If you declared a Texture2D and then a StructuredBuffer, the Texture2D would get assigned t0 and the StructuredBuffer t1. In your C++ code, those registers correspond to the slots you use when calling PSSetShaderResources. So to set registers t0 and t1, you would pass an array of two ID3D11ShaderResourceView pointers and specify 0 as the start slot, which binds the two SRVs to slots 0 and 1. Each shader stage has its own set of slots, so if you use a StructuredBuffer in your vertex shader it will get assigned to slot 0 of that stage, and you would use VSSetShaderResources to bind an SRV to it.
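As a concrete sketch of that slot assignment (all resource names here are made up), the shader-side declarations might look like:

```hlsl
// Pixel-shader resources. Explicit register() annotations are shown,
// but the compiler would assign the same slots in declaration order.
Texture2D                DiffuseMap  : register(t0); // SRV slot 0
StructuredBuffer<float4> Lights      : register(t1); // SRV slot 1
SamplerState             LinearClamp : register(s0); // sampler slot 0

cbuffer PerObject : register(b0) // constant buffer slot 0
{
    float4x4 World;
};
```

On the C++ side, binding both SRVs would then look like `ID3D11ShaderResourceView* srvs[2] = { diffuseSRV, lightsSRV }; context->PSSetShaderResources(0, 2, srvs);`.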
One thing you have to watch out for is that a resource will only get assigned a slot if it actually gets used in your shader. So if you just declare a texture and don't use it or the call to Sample gets optimized away, the compiler won't assign it a slot. There are two things you can do to help with this:

1. You can manually assign slots using the "register" syntax:

Texture2D DiffuseMap : register(t0); // assigns the texture to slot 0

2. You can query the slot of a resource at runtime using the reflection interface, primarily through ID3D11ShaderReflection::GetResourceBindingDesc or ID3D11ShaderReflection::GetResourceBindingDescByName. This is what the effects framework does under the hood to figure out what slot to use when you pass a name or handle.

Constant buffers are the most complicated part. Each constant buffer itself uses the slot system, so when you declare a constant buffer in a shader it will get assigned a register (or you can assign one manually), and you then bind a constant buffer in your app code using *SSetConstantBuffers. However, you also have a bunch of variables within the constant buffer itself. These variables get packed at some offset from the start of the buffer, similar to how members of a struct get packed at an offset, but HLSL has its own rules for how variables are packed and aligned, which are covered in the HLSL packing-rules documentation.

Typically you work with constant buffers by creating an ID3D11Buffer of the appropriate size with D3D11_BIND_CONSTANT_BUFFER and D3D11_USAGE_DYNAMIC, then using ID3D11DeviceContext::Map to fill the contents of that buffer with the values of the variables. It's possible to use the reflection interface to find the offset of a particular variable, which you can then use to memcpy the value of that variable to the proper offset from the pointer you get back from Map. However, a simpler solution is to make a C++ struct that matches the layout of your constant buffer. Then you can cast the pointer from Map to that type and set all of the values, or fill out the C++ struct ahead of time and memcpy its contents into the pointer from Map. Either way you need to be careful of the HLSL packing rules: typically this requires either adding padding to your C++ struct to make variables line up on 16-byte boundaries, or using __declspec(align(16)) to accomplish the same thing.
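To make the struct-mirroring approach concrete, here is a minimal, portable sketch. The cbuffer contents are hypothetical and only the layout is shown (no D3D calls); the static_asserts check that the C++ struct matches HLSL's 16-byte slot packing.

```cpp
#include <cassert>
#include <cstddef>

// Mirrors a hypothetical HLSL cbuffer:
//   cbuffer PerFrame { float4x4 ViewProj; float3 LightDir; float Intensity; };
// HLSL packs variables into 16-byte (float4) slots. A float3 followed by a
// float shares one slot, so this struct needs no explicit padding.
struct PerFrameCB
{
    float viewProj[16]; // float4x4: 64 bytes, four 16-byte slots
    float lightDir[3];  // float3: starts a new 16-byte slot at offset 64
    float intensity;    // packs into the remaining 4 bytes of that slot
};

// By contrast, a float3 followed by another float3 would NOT pack, because a
// variable cannot straddle a 16-byte boundary; explicit padding is required.
struct TwoDirectionsCB
{
    float dirA[3];
    float padA; // pad to the next 16-byte slot
    float dirB[3];
    float padB; // pad the tail so the total size is a multiple of 16
};

static_assert(sizeof(PerFrameCB) == 80, "must match the HLSL cbuffer size");
static_assert(sizeof(PerFrameCB) % 16 == 0, "cbuffer sizes are multiples of 16");
static_assert(offsetof(PerFrameCB, intensity) == 76, "packs after the float3");
static_assert(sizeof(TwoDirectionsCB) == 32, "two 16-byte slots");
```

With the layout verified at compile time, the pointer returned by Map can be cast to `PerFrameCB*` (or a pre-filled `PerFrameCB` can be memcpy'd into it) without the fields drifting out of alignment with the shader.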

The book in my signature that I worked on also has an in-depth explanation of how all of this works, if you're ever looking for something to buy on Amazon. :D



