It isn't entirely clear what you are asking for here, so I'll take my best shot. The thread group and dispatch dimensions are usually chosen to model the data set being processed. If the data set is a 2D texture, then your dispatch and thread group sizes are chosen to address chunks of that 2D region (and your z dimensions will be 1). Alternatively, if you are processing a 1D buffer of data, then you would just use the x coordinate for addressing.
A flattened linear index is available in SV_GroupIndex, but note that it is local to a single thread group rather than spanning the complete dispatch; for a dispatch-wide linear index you can compute one from SV_DispatchThreadID. This is good when you are processing a linearly stored data set. Otherwise you can choose one of the other system value semantics (SV_GroupID, SV_GroupThreadID, or SV_DispatchThreadID) to get addressing better suited to your problem.
Group shared memory is intended for sharing intermediate results among the threads in a thread group, so typically you would use the SV_GroupThreadID system value to address it, since that value is local to the thread group. If you want a linear index into your group shared memory, you can either choose a 1D thread group shape, or flatten your indices manually based on the shape of the thread group. However, I would highly recommend choosing your thread group shape according to the data you are processing so that you can skip the extra math in your shader.
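If you do need to flatten manually, the arithmetic is simple. Here is a small sketch of the same calculation HLSL performs for SV_GroupIndex within a single thread group, written as plain C++ for illustration (the function name and parameters are mine, not part of any API):

```cpp
#include <cassert>

// Flatten a 3D thread coordinate (x, y, z) within a thread group of
// dimensions (width, height, depth) into a linear index. This matches
// the value HLSL provides in SV_GroupIndex:
//   index = z * width * height + y * width + x
unsigned int FlattenGroupThreadID(unsigned int x, unsigned int y, unsigned int z,
                                  unsigned int width, unsigned int height)
{
    return z * width * height + y * width + x;
}

// For an 8x8x1 thread group, as declared with [numthreads(8, 8, 1)]:
//   FlattenGroupThreadID(0, 0, 0, 8, 8) is 0  (first thread)
//   FlattenGroupThreadID(3, 2, 0, 8, 8) is 19 (2 * 8 + 3)
//   FlattenGroupThreadID(7, 7, 0, 8, 8) is 63 (last thread)
```

The same formula works for a dispatch-wide index if you feed it the dispatch dimensions instead of the group dimensions.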
What is your motivation at this point? It seems that you have covered all of the beginner and intermediate materials, so you will increasingly be specializing as you go forward. In order to do that, you will have to decide what it is that you want to accomplish.
For example, do you want to dive deeper into GI topics, focus on GPU performance, or perhaps shadow rendering? What tools do you want to add to your toolbox?
If you think about it, any object that wants to be notified of an event has to already be aware of what that event is, right? Given that, you can define an arbitrary std::function signature for each event type, and each handler that gets registered will be checked against that type (a handler with the wrong signature simply won't compile against the registration).
To handle multiple event signatures, I would suggest that you either have your event manager allow event types to be registered with it, or do away with the event manager altogether and just have objects register with one another. The former provides a nice single point of access for monitoring events, while the latter gives you significantly more flexibility to add new systems without a centralized event type ID system.
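As a concrete sketch of this idea, here is a minimal typed event in C++. The names (Event, Subscribe, Fire) are my own for illustration, not from any particular framework; each Event&lt;Args...&gt; instantiation carries its own handler signature, so registering a handler with the wrong signature is a compile-time error:

```cpp
#include <functional>
#include <utility>
#include <vector>

// A minimal typed event. Each instantiation of Event<Args...> defines
// its own handler signature, so the compiler enforces that every
// registered handler matches the event's type.
template <typename... Args>
class Event
{
public:
    using Handler = std::function<void(Args...)>;

    void Subscribe(Handler handler) { m_handlers.push_back(std::move(handler)); }

    // Invoke every registered handler with the event's arguments.
    void Fire(Args... args)
    {
        for (auto& handler : m_handlers)
            handler(args...);
    }

private:
    std::vector<Handler> m_handlers;
};

// Usage: an event manager could own a named collection of these, or
// objects could expose them directly to one another.
// Event<int, int> onResize;
// onResize.Subscribe([](int w, int h) { /* react to the new size */ });
// onResize.Fire(1280, 720);
```

An event manager built on top of this would just map event IDs (or names) to Event instances; in the decentralized design, each object simply exposes its own Event members.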
Are you looking for an editor for your own engine, or just something that has an editor built in? There are probably not many editors that can work directly with your own engine, since there is no easy way for them to know what you are doing under the covers.
Perhaps you just want a modeling package for .X files? If so, I believe Milkshape3D has a .X exporter that you could try out. I haven't personally used it, but it may provide a low cost solution (not open source though...).
Otherwise, if you want a full solution with an engine and editor built in, you could always check out Unity3D. This is again not open source, but it is much more fully featured than most of the editors out there.
RWStructuredBuffer<uint> is the resource representation in HLSL, and you use a UAV to bind the resource to the pipeline on the C++ side. So in that case, the UAV and the RWStructuredBuffer<uint> are representing the same resource, just in different domains.
CopyResource is a way to copy the contents of one resource to another. If you are starting out with an index buffer, and you want to copy it to another resource that has the ability to be bound as a UAV, then you would use CopyResource to do that prior to invoking your compute shader.
The RWStructuredBuffer is actually the source of your data within the compute shader, and you will also need to write the sorted data back into it. That's where the name RW comes from - it is for reading and writing.
It sounds to me like you are unfamiliar with resources in D3D11, so I would recommend spending a bit of time with a good book to get familiar with them. Otherwise it will be a long, hard struggle to get even something trivial working. Any time that you invest in learning about resources is time well spent, so go for it!
Does anyone know a way to access the vertex and index buffers from a compute shader?
The only way to get data into a compute shader is through a UAV, an SRV, or a constant buffer, so your input to the shader has to be one of those three options. Constant buffers are probably not a good choice, since their data can't change during the shader invocation (hence the name). SRVs are probably also not a great choice, since you will eventually need to store the data back to a buffer after you have sorted it. So most likely you would want to use a UAV to access the data, sort it, and then store it back to the UAV resource. Another option is to read the data from an SRV and write it to a UAV, which may help you with managing the data in flight.
Be aware that there are restrictions on which bind and usage flags a vertex buffer can have (test them out yourself by creating a buffer resource with various combinations of bind and usage flags). This may force you to create an extra buffer that serves as a go-between for your algorithm passes.
(It's awesome to see Jason Zink replying in this thread - his book "Practical Rendering and Computation with Direct3D 11" was an invaluable reference for me while I was figuring out how Direct3D 11 works, so that I could borrow some of its API for Rasterizr. Thank you Jason!)
I'm happy that it was useful for you!
Your debugging tool looks pretty nice - do you have any advice to share about implementing such a system? I have been considering adding a tool like this to Hieroglyph 4, and any advice or lessons learned would be very welcome indeed!
You use a polymorphic function when you want to refer to a group of objects through their common base class, even though each object can have different behavior. This is useful when the objects are created at runtime and their concrete types are not known when you are writing your code. Regardless of which type of mammal it is, you can access its methods through a common, well-defined interface. This is typically called dynamic polymorphism, since the types are determined dynamically (at runtime).
An alternative method is to use template functions, which produce a different version of the function for each mammal subclass when the template is instantiated. Using a generic method like this provides static polymorphism, since the types are determined at compile time (i.e., when each template is instantiated, the types are fully known).
Both approaches give you generic access to objects, but each has its costs: dynamic polymorphism pays a runtime dispatch cost, while static polymorphism adds compilation time and code size.
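Here is a minimal sketch of both flavors, sticking with the mammal example (the class and function names are mine, purely for illustration):

```cpp
#include <memory>
#include <string>
#include <vector>

// Dynamic polymorphism: the call is resolved at runtime through the
// virtual function table, so a container of base-class pointers can
// hold any mammal subtype.
struct Mammal
{
    virtual ~Mammal() = default;
    virtual std::string Speak() const = 0;
};

struct Dog : Mammal { std::string Speak() const override { return "Woof"; } };
struct Cat : Mammal { std::string Speak() const override { return "Meow"; } };

// Static polymorphism: a separate version of this function is generated
// for each type it is instantiated with, and the call is resolved at
// compile time with no virtual dispatch.
template <typename T>
std::string SpeakTwice(const T& mammal)
{
    return mammal.Speak() + mammal.Speak();
}

// Usage:
// std::vector<std::unique_ptr<Mammal>> zoo;
// zoo.push_back(std::make_unique<Dog>());
// zoo.push_back(std::make_unique<Cat>());
// zoo[0]->Speak();    // dynamic dispatch through the Mammal interface
// SpeakTwice(Cat{});  // static dispatch, instantiated for Cat
```

Note that SpeakTwice would also work with any unrelated type that happens to have a Speak method, which is part of the flexibility (and looseness) of the static approach.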
The only benefit I can think of is that you would already have access to the IDXGIFactory in case you wanted to create additional swap chains (for multi-windowed rendering), but that can easily be re-acquired through a few COM calls on the device interface itself. So there really isn't much difference between the two from a functional perspective.
First off, welcome to the community! Second, you are right that most people probably aren't familiar with the framework you are using. Even so, there are some pretty knowledgeable people hanging around here, so there should be some help for you somewhere...
It isn't really clear to me what the problem is; do you have a problem with the shader itself, or is it that you aren't able to get the shader applied at all? The former can probably be helped here in the forums, whereas the latter is probably better suited to the game's own forums (the Oblivion forums, I mean).
Perhaps a screenshot would help us to understand the problem.
Tiago has the story mostly right. Wolfgang Engel originally published Programming Vertex and Pixel Shaders, and I think his plan was to publish a second edition that included the expanded scope of D3D10 with geometry shaders. That effort never made it to the dead-tree version, and several of us joined together to finish up the book and publish it online. So the second edition listed on Amazon was actually never released.
I also agree with Tiago's comment about the newer content. Unless you have special requirements to use D3D10, you are much better off going with D3D11 for a variety of reasons. You can target D3D9-level hardware with the D3D11 API through feature levels, and it provides a few nice features that make it more usable than D3D10. The topic has been discussed many times here on the forums, so a few searches should give you lots of detail on it.
I will look into Visual Studio 2013 Express when I can.
Do you have a Win8 machine around anywhere? The issue is that the graphics debugger is only available in the Windows Store version of VS2013. You can still use it to debug desktop apps, but since the Store version won't install on Win7, you need to do remote debugging from a Win8 machine. That isn't the best scenario, but it is still better than having PIX crash on you!
Which OS are you targeting and/or developing on? Like Ravyne and MJP mentioned, you can use the Graphics debugger from VS2013 for Store Apps to debug desktop apps too. Depending on which platform you are targeting, you will have to jump through different hoops - but in the end you will get a usable debugger.