PointMan

DX11 [SlimDX] D3D11 From D3D9


Recommended Posts

A couple years ago I built a procedural terrain generation engine/app in SlimDX. Back then I did it in D3D9. I've since returned to the project and decided to get the latest from SlimDX, and give it a go with D3D11 - and not surprisingly I'm having issues. I'm completely new to D3D11 and - even though D3D10 was around back then - I'm unfamiliar with it as well.

So far, from following tutorials, I can create some triangles. I probably know enough to manipulate them a bit with shaders - but that's about it.


I will list some concepts and classes/keywords that I'm having trouble porting/upgrading - if you can provide help or links to documentation, that would be greatly appreciated.

1. Camera - I have found next to no documentation on camera implementation in SlimDX11. I can still use my old camera class which basically just holds the vector3s representing things like position and target. I can even manipulate these values from keyboard/mouse input. The problem is how to actually affect the D3D11 camera. In D3D9, I would initialize my camera (projection/view) using:


// D3D9: build a projection matrix and hand it to the fixed-function pipeline.
// (Width/Height need a float cast here, otherwise it's integer division.)
transformMatrix = Matrix.PerspectiveFovLH((float)Math.PI / 4, (float)this.Width / this.Height, 0.001f, 100000f);

device.SetTransform(TransformState.Projection, transformMatrix);


For D3D11 there is no device.SetTransform(), and I cannot find any SetTransform() anywhere. Every tutorial/sample for SlimDX (D3D11) uses a static camera.

2. Device.SetRenderState() - In D3D9 I used this to set several values, including lighting, cull mode, and primitive fill mode. Any ideas on the D3D11 implementation?

3. VertexFormat - This is from D3D9 too. I used a custom vertex struct holding a position, colour (diffuse), and texture coordinates. Since all my vertices were of this type, I'd then inform the D3D9 device using:

device.VertexFormat = MyCustomVertex.Format;

which actually came from:



public static VertexFormat Format
{
    get
    {
        return VertexFormat.PositionBlend1 | VertexFormat.Diffuse | VertexFormat.Texture1;
    }
}




Thanks for any help anyone can provide. Maybe my Google-Fu is just weak and rusty, but I've had real trouble finding answers for this stuff.

1) The most important difference to know is that Direct3D 10 and 11 lack the fixed function pipeline. That means that many things that were previously done for you have to be done in shaders. You've encountered the first example of this: you have to pass the matrices to your shaders.

2) Another difference is that states are now set in bunches. For example, you set all of the rasterizer states at once with Device.Rasterizer.State. Lighting is another fixed-function feature that you now need to implement in shaders.

3) Look up InputElement.
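
For example, a rough D3D11 counterpart of your old position/diffuse/texture format could be described like this (a sketch only - the semantic names, formats, and byte offsets are assumptions, and they have to line up with the input parameters your vertex shader declares):

// Position (float3, 12 bytes), diffuse colour (float4, 16 bytes), one texcoord set (float2, 8 bytes).
var elements = new[]
{
    new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0, InputClassification.PerVertexData, 0),
    new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 12, 0, InputClassification.PerVertexData, 0),
    new InputElement("TEXCOORD", 0, Format.R32G32_Float, 28, 0, InputClassification.PerVertexData, 0),
};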

I'm working with Direct3D 10 for the first time, and I love it. However, you're going to find SlimDX documentation and samples for it limited. Fortunately, it's not so bad, because the SlimDX API is deliberately close to the native API, so it's trivial to follow C++ code.

Thanks for the info regarding the camera.

Can anyone offer any guides for camera-control using shaders?

As for your answer to item 2 - it looks like Device.Rasterizer.State is implemented in D3D11 as Device.ImmediateContext.Rasterizer.State. I haven't tested it yet, but I've reproduced my previous render states (except the lighting) using:


// Describe the rasterizer state in one bunch, then create and bind the state object.
RasterizerStateDescription rsd = new RasterizerStateDescription();
rsd.FillMode = FillMode.Solid;
rsd.CullMode = CullMode.Back;
rsd.IsDepthClipEnabled = true; // a zeroed description leaves depth clipping off, unlike the D3D default
device.ImmediateContext.Rasterizer.State = RasterizerState.FromDescription(device, rsd);


I'm finding some things regarding InputElement, and it looks like it's probably what I want. That said, the arguments seem too 'loose' to me - not strongly typed the way I'd expect. As I showed, in D3D9 I used things like VertexFormat.Position, whereas in D3D11 it looks like it's just a string:

InputElement ie = new InputElement("POSITION", 0, Format.R32G32B32_Float, InputElement.AppendAligned, 0, InputClassification.PerVertexData, 0);

Is that really the implementation? Once I have my InputElement(s), how do I inform device/context?

Yes, I had the impression that your final point was the case - that SlimDX now more closely resembles the native C++ API. Unfortunately, that is frustrating for me, since the whole reason I went the MDX/SlimDX route was to get an easier and simpler API. Oh well - maybe I'll learn more this way.

Thanks for the help so far!



Look at InputLayout creation - that's how you tell the device/context about your input elements.
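
Something along these lines (a sketch - vsBytecode, elements, vertexBuffer, and vertexStride are placeholder names for your compiled vertex shader bytecode, your InputElement array, and your vertex buffer data):

// Create the layout against the vertex shader's input signature...
InputLayout inputLayout;
using (ShaderSignature signature = ShaderSignature.GetInputSignature(vsBytecode))
{
    inputLayout = new InputLayout(device, signature, elements);
}

// ...then bind it, the primitive topology, and the vertex buffer to the immediate context.
var context = device.ImmediateContext;
context.InputAssembler.InputLayout = inputLayout;
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertexBuffer, vertexStride, 0));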

But as you're going to have a bunch of questions like this, it would be easier for you to buy a Direct3D 10 book (Frank D. Luna, Introduction to 3D Game Programming with DirectX 10) or a Direct3D 11 one (Practical Rendering and Computation with Direct3D 11 by Jason Zink, Matt Pettineo, and Jack Hoxley); it will be worth the investment.
Although Luna's book is getting old, it is probably the easiest way in for a beginner or someone coming from D3D9.

Concerning the easiness of a managed DirectX API, SlimDX as well as SharpDX are not intended to be higher-level APIs than the DirectX C++ API. They just wrap things in the philosophy of .NET languages (using typed arrays instead of pointers, checking HRESULTs, providing overloaded methods, providing enums where C++ only had defines, grouping functions with their respective types... etc.), but you are still required to understand how to use the underlying C++ API correctly. And most of the problems you are referring to are not related to the managed API but to the gap between Direct3D9 and Direct3D10/11 (removal of the fixed-function pipeline, differences in using resources through resource views, binding vertex buffers... etc.)

Still, you get a far easier API using a managed one, and your code will look much more concise and cleaner than its C++ counterpart. In the end, you will appreciate using the Direct3D11 API, though at the cost of implementing some stuff that previous Direct3D APIs hid from you.

"Camera" is a high-level abstraction, and as such D3D has no concept of it. All D3D cares about is that you transform vertices into some projection coordinate space, and then it rasterizes triangles for you.

That said, handling a camera really doesn't have to be complicated. You already know how to make a projection matrix using utility functions, so that part doesn't have to change. So all you need is a view matrix, which transforms coordinates from world space to camera-relative space (AKA view space). It's possible to make this using a "LookAt" utility function, or you can make one yourself from transform data. Basically you just keep track of where your camera is located (its translation) and how it's rotated (its orientation), the same way you would keep track of this for any other object in your world. Then you just create a 4x4 transformation matrix representing this rotation + translation, and invert it. The inverted matrix is then your view matrix for that camera.
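
In SlimDX terms that amounts to something like this (a sketch - yaw, pitch, position, and target are placeholders for whatever your camera class tracks):

// Build the camera's world transform from its orientation and position, then invert it.
Matrix cameraWorld = Matrix.RotationYawPitchRoll(yaw, pitch, 0.0f) * Matrix.Translation(position);
Matrix view = Matrix.Invert(cameraWorld);

// Or, if you track a position and target instead, the utility function gives the same result:
Matrix viewFromLookAt = Matrix.LookAtLH(position, target, Vector3.UnitY);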

As for input elements, there are no longer any hard-defined input semantics. Since you have to use shaders now, the semantics are completely arbitrary, since your shader has to manually interpret the input elements anyway. You use these input elements when you create an input layout. An input layout is a mapping between the input elements in a vertex buffer and the input parameters taken by your vertex shader. So typically you make one when you assign a vertex shader to some mesh/object.


"Camera" is a high-level abstraction, and as such D3D has no concept of it. All D3D cares about is that you transform vertices into some projection coordinate space, and then it rasterizes triangles for you.

That said, handling a camera really doesn't have to be complicated. You already know how to make a projection matrix using utility functions, so that part doesn't have to change. So all you need is a view matrix, which transforms coordinates from world space to camera-relative space (AKA view space). It's possible to make this using a "LookAt" utility function, or you can make one yourself from transform data. Basically you just keep track of where your camera is located (it's translation) and how it's rotated (it's orientation), the same way you would keep track of this for any other object in your world. Then you just create a 4x4 transformation matrix representing this rotation + translation, and invert it. The inverted matrix is then your view matrix for that camera.

As for input elements, there are no longer any hard-defined input semantics. Since you have to use shaders now, the semantics are completely aribitrary since your shader has to manually interpret the input elements anyway. You use these input elements when your create an input layout. An input layout is a mapping between your input elements in a vertex buffer, and the input parameters taken by your vertex shader. So typically you make one when you assign a vertex shader to some mesh/object.


Right - like I said, I can handle the matrix math regarding the camera position and direction - my issue is how to affect the actual Device/Context. What is the new way of doing:

device.SetTransform(TransformState.View, Matrix.LookAtLH(camera.GetPosition(), camera.GetTarget(), camera.GetUp()));

I have my camera/lookat function (that you mention) working. Just wondering how to actually USE it. It sounds like I have to use shaders? But I have no idea how to do that (affect camera position/direction with shaders).



Regarding camera controls: you apply matrices to your shader with code along the lines of DXEffect.GetVariableByName(name).AsMatrix().SetMatrix(someMatrix). In your shader code, you have a matrix defined with a certain name, and that call applies the value you want to it. I can't really go into a lot more detail; it sounds as if you really need to look up some tutorials. Shaders are somewhat complicated to get into at first, but god, I love them. The fine degree of control they give over everything is amazing.
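
Put together, the pattern looks roughly like this with the D3D11 Effects framework (a sketch - effect, worldViewProj, and vertexCount are made-up names, and it assumes your .fx file declares a global "matrix worldViewProj;"):

// Push the combined matrix into the effect's named variable each frame...
Matrix wvp = world * view * projection;
effect.GetVariableByName("worldViewProj").AsMatrix().SetMatrix(wvp);

// ...then apply the pass so the updated constants are bound before drawing.
effect.GetTechniqueByIndex(0).GetPassByIndex(0).Apply(device.ImmediateContext);
device.ImmediateContext.Draw(vertexCount, 0);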

Regarding your comment about Device.ImmediateContext.Rasterizer.State: I haven't used Direct3D11 yet, so I can't help you with that one. The APIs of Direct3D 10 and 11 are pretty much identical, but 11 lacks some high-level features that I don't feel like implementing myself, and I don't have a DirectX 11-capable video card anyway.

Here's my recommendation: check out XNA if you haven't already. It sounds right up your alley. It has the simpler .NET-esque API you want and is a LOT quicker to get into. It still requires that you use shaders, but it provides a basic one with lighting effects and such, so you won't need to worry about learning much about them right away. I avoid XNA nowadays because I always feel it's forcing me into a particular design paradigm and I prefer having a finer degree of control, but these are personal preferences and I otherwise highly recommend it.

Yeah, everything has to be done with shaders, no exceptions. The primary means of sending parameters like a view matrix to a shader is through constant buffers. On the CPU side of things, a constant buffer is just an ID3D11Buffer that you create with the D3D11_BIND_CONSTANT_BUFFER flag (typically you also create it with D3D11_USAGE_DYNAMIC, since you usually modify its contents every frame). Then you Map that buffer (in D3D10/D3D11, Locking a resource is now called Mapping) and fill its contents with constant data, much like you would with a vertex buffer.

On the shader side of things, you declare a cbuffer and list the variables contained inside it. Then in your shader, you can just access the individual variables in the constant buffer as if they were globals. In order for the shader to get the data properly, you need to ensure that on the CPU side the offset of each parameter within the buffer matches what you declared in the shader, while observing the rules that HLSL uses for packing variables within a constant buffer. You also have to bind the constant buffer to the context for the shader stage that you intend to use it in, making sure that you specify the right slot (slots can be manually assigned in HLSL, or the compiler will automatically assign them and you can query the slot through the reflection APIs).
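
As a rough SlimDX sketch of that whole round trip (buffer and variable names are made up, and MapSubresource overloads have varied between SlimDX releases, so check it against your build):

// HLSL side, for reference:
//     cbuffer PerFrame : register(b0) { float4x4 worldViewProj; };

// Create a dynamic constant buffer big enough for one float4x4 (64 bytes; sizes must be multiples of 16).
var cbDesc = new BufferDescription
{
    SizeInBytes = 64,
    Usage = ResourceUsage.Dynamic,
    BindFlags = BindFlags.ConstantBuffer,
    CpuAccessFlags = CpuAccessFlags.Write
};
var constantBuffer = new SlimDX.Direct3D11.Buffer(device, cbDesc);

// Each frame: map with WriteDiscard, write the matrix (transposed to match HLSL's default packing), unmap.
var context = device.ImmediateContext;
DataBox box = context.MapSubresource(constantBuffer, MapMode.WriteDiscard, SlimDX.Direct3D11.MapFlags.None);
box.Data.Write(Matrix.Transpose(worldViewProj));
context.UnmapSubresource(constantBuffer, 0);

// Bind the buffer to the vertex shader stage at slot b0.
context.VertexShader.SetConstantBuffer(constantBuffer, 0);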

If you decide to use the Effects framework, it can simplify some of this for you. Basically you just declare the constants in your shader, and the framework will handle creating the constant buffer for you. Then when you want to set each parameter, you can do so by name and the framework will handle the mapping and binding for you.

Either way, I would suggest looking through the basic tutorials that come with the DX SDK. They will demonstrate these basic ideas, so that you can get started with shaders and setting up the pipeline.
