Learning Curve from DX9 to DX11
I have everything running in DX9. I am debating about moving to DX11 sometime this year, and I wanted to know what the learning curve is like. I have deep knowledge of DX9, but I'm mostly interested in API changes where things are done totally differently from the way they were done in DX9.
Has anyone gone from DX9 to DX11 and how difficult was the transition?
Edit:
I know this is a broad question based on many factors, but I'm still interested in hearing anyone's general thoughts.
Thanks
Jeff.
Well, conceptually you draw the same way - you set the resources for drawing (buffers, textures) and the input layout (vertex format), and then you draw the geometry.
D3D11 no longer has a fixed-function pipeline, so you have to use shaders for vertex and pixel processing.
The shader constant registers are replaced by constant buffers, which allow for very flexible memory reads in your shaders. If you have D3D11-class hardware, you can also use read/write buffers for communicating between shader instances and/or reading back data written by the pixel shader or the compute shader.
You can still use D3D11 if you don't have the newest hardware, but the advanced features introduced by D3D11 (tessellation, rw buffers, compute shader) will simply be unavailable with older hardware.
The device has been separated into a resource factory (for "create*") and a drawing context (for "set*" and "draw*" etc.). In addition, video memory management and windowing system interoperation is provided by a separate (but tightly integrated) component called DXGI.
You can create several "deferred" drawing contexts for several threads, and then join them at the main thread when you want to submit them to the device.
These are just the tip of the iceberg, but contain the most important changes in my opinion. If you know your way well with D3D9, a little practice will surely get you going with D3D11 as well - even though the API has been overhauled completely.
The biggest issue I found was with the documentation. I need to be upfront here and confess that I haven't checked out the newer (Windows SDK) documentation, so what I say must be viewed in terms of the most recent (June 2010) DXSDK, and the few SDKs leading up to that one. Hopefully things have changed since then.
That documentation, in short, was fairly dreadful. Yes, there was a lot of information there, but key items were scattered across multiple (sometimes unconnected) entries, I often had to refer back to the D3D10 equivalent for crucial info, there were occasional documentation bugs, the help file indexing was all messed up (certain items weren't even in the index), and sometimes structs/enums weren't linked from the documentation page for the API call that used them.
Overall it gave the impression of assuming that you already knew D3D10, and only covered the additional 11-specific info in any detail. Like I said, hopefully that's improved by now (it had already improved immensely by June 2010 but still had quite a way to go before hitting the highs of the D3D9 docs).
Rant over.
Regarding the API itself: if you're already using shaders (but not Effects), vertex declarations (i.e. no FVF codes) and vertex buffers (i.e. no Draw(Indexed)PrimitiveUP), you'll have an easier time of it. If you're using any of the D3DX helper objects (e.g. Mesh, Sprite, Effect), things will be a good deal harder, as you'll need to write a lot of code yourself for functionality they previously took care of. You need to be careful around constant buffers and state objects. Instancing is much cleaner and easier, but dynamic buffer (and texture) updates have gone down a similar route to OpenGL in providing several different methods of updating them (with rather vague guidelines) instead of just one.
Yea, DXGI virtualizes video memory access and implements robust windowing system cooperation so "device lost" can be considered an obsolete problem on D3D11.
Some things involve a bit of complication, like the fact that it's no longer possible to use a Z-buffer that is larger than the render target and reuse that depth surface across various resolutions.
Another thing that's touchy to get right is the classical 0.5-texel shift in texture-read UVs. There is a "what changed from DX9 to DX10" list in the SDK that helps a lot.
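For reference, the D3D10/11 convention is that screen pixel (x, y) samples texel (x, y) exactly when the UV points at the texel center, so the old D3D9 trick of shifting fullscreen-quad vertex positions by -0.5 pixel must be removed when porting. A tiny sketch (the helper names are mine):

```cpp
// D3D10/11 texel-center convention: pixel (x, y) maps to the center of
// texel (x, y) at uv = ((x + 0.5) / width, (y + 0.5) / height). In D3D9
// the pixel and texel coordinate systems were offset by half a texel,
// which is why old fullscreen passes carried an explicit -0.5 correction.
float TexelCenterU(int x, int width)  { return (x + 0.5f) / width; }
float TexelCenterV(int y, int height) { return (y + 0.5f) / height; }
```

So for a 2-texel-wide texture, the two texel centers sit at u = 0.25 and u = 0.75, with no extra fudge term.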
I love DX10/11 because it is so neat:
- input signatures: no more possible mismatch between the vertex buffer layout and the vertex shader input.
- constant buffers: no more ugly state blocks
- better debug output and break-on-error facilities
- all "expensive" operations are now clearly expensive, e.g there is no more Stretch function. only CopyResource.
...