Questions about D3D9 and D3D11
#1: D3D11_RASTERIZER_DESC is just a descriptor for creating an ID3D11RasterizerState. Change the values as you please and create new ID3D11RasterizerState objects, one for each combination of settings you need.
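For example, a renderer can keep a small cache that hands back one immutable state object per unique descriptor. Here is a minimal sketch of that pattern, with the descriptor and state types mocked as plain structs so it stands alone (a real version would key on D3D11_RASTERIZER_DESC and call ID3D11Device::CreateRasterizerState; all names here are illustrative):

```cpp
#include <map>
#include <memory>
#include <tuple>

// Stand-in for D3D11_RASTERIZER_DESC (only the fields keyed on here).
struct RasterDesc {
    int  fillMode = 3;      // e.g. D3D11_FILL_SOLID
    int  cullMode = 2;      // e.g. D3D11_CULL_BACK
    bool frontCCW = false;
    bool operator<(const RasterDesc &o) const {
        return std::tie(fillMode, cullMode, frontCCW)
             < std::tie(o.fillMode, o.cullMode, o.frontCCW);
    }
};

// Stand-in for ID3D11RasterizerState; immutable once created.
struct RasterState { RasterDesc desc; };

// One state object per unique descriptor, created on demand.
class RasterStateCache {
    std::map<RasterDesc, std::shared_ptr<RasterState>> m_cache;
public:
    std::shared_ptr<RasterState> Get(const RasterDesc &desc) {
        auto it = m_cache.find(desc);
        if (it != m_cache.end()) return it->second;
        auto state = std::make_shared<RasterState>(RasterState{desc});
        m_cache.emplace(desc, state);
        return state;
    }
    std::size_t Size() const { return m_cache.size(); }
};
```

Requesting the same settings twice returns the same cached object, so state objects are created once and reused.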
#2: You don’t need to know multi-sample levels until the device has been created. If you think you do, you are wrong. Whatever problem you are having here needs to be fixed on a higher level. D3D11CreateDeviceAndSwapChain() is just a helper function—you should always be using D3D11CreateDevice() and IDXGIFactory::CreateSwapChain().
#3: This depends highly on your overall architecture and is a vastly scoped question.
#4: Write shaders that permute (I use macros, which works well for individuals, but stitching is used in major companies where the shaders will be larger than Jupiter).
Once again, this is a vastly over-arching question that should be asked in its own topic.
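As a sketch of the macro approach: permutations can be described by a feature bitmask that is turned into the define list handed to the shader compiler (as D3D_SHADER_MACRO entries for D3DCompile, for example). The flag names here are made up for illustration:

```cpp
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// Hypothetical feature bits for one shader permutation.
enum ShaderFlags : uint32_t {
    SF_NORMAL_MAP = 1u << 0,
    SF_FOG        = 1u << 1,
    SF_SKINNING   = 1u << 2,
};

// Build the ("NAME", "1") macro list that would be passed to the
// shader compiler for this permutation.
std::vector<std::pair<std::string, std::string>>
BuildShaderMacros(uint32_t flags) {
    std::vector<std::pair<std::string, std::string>> macros;
    if (flags & SF_NORMAL_MAP) macros.push_back({"NORMAL_MAP", "1"});
    if (flags & SF_FOG)        macros.push_back({"FOG", "1"});
    if (flags & SF_SKINNING)   macros.push_back({"SKINNING", "1"});
    return macros;
}
```

The same bitmask doubles as a cache key, so each permutation is compiled only once.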
#5: I don’t know. I always use my own math libraries.
If this is XnMath, I would expect D3DXMath to be much faster.
What are the differences? XnMath appears to be just a math library built for correctness rather than performance. It also seems to be lacking a 4×4 matrix class. On the other hand it is portable, whereas D3DXMath is not.
What do real companies use? Their own math libraries. D3DXMath is not portable.
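For a sense of what "own math library" means at its smallest, here is a minimal, hypothetical starting point (no SIMD, no alignment); the real work is in growing this into matrices, quaternions, and fast, well-tested versions of all of it:

```cpp
#include <cmath>

// A minimal 4-component vector, the kind of seed a home-grown
// math library grows from.
struct Vec4 {
    float x, y, z, w;
};

inline Vec4 operator+(const Vec4 &a, const Vec4 &b) {
    return { a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w };
}

inline float Dot(const Vec4 &a, const Vec4 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

inline float Length3(const Vec4 &v) {   // length of the xyz part
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}
```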
L. Spiro
#2 - on top of what LS pointed out above, you don't HAVE to use a multisampled swap chain; you can create a separate multisampled render target, which you resolve onto the swap chain's render target.
This is useful when the 3D scene should be multisampled but the HUD/etc. doesn't need it.
#3 - what does the draw function do?
#4 - my personal preference is to have no in-built shaders, and allow the user to provide and choose the shader for each draw call.
That's not necessarily the best way to solve the problem, but it'll put you in good shape to adapt to D3D12 and GLNext when they're publicly available. I'm not sure that there's a good way to really emulate their APIs on the current ones, but keeping things "chunkier" (fewer individual objects and states) is a good start.
That applies to the pipeline states, shader stages, etc. The hardware is far less modular than the APIs make it seem and the newer APIs are more directly reflecting that. This is the same story with D3D9 -> D3D10, as the changes in hardware to chunkier states had already started, which is why D3D10/11 have a handful of state objects rather than a bunch of individual state flags and values like D3D9 did.
Resource binding is also likely going to be quite different. Try to build out a bindless model, use texture arrays as much as possible, and generally just follow the high-end D3D11 and OpenGL 4.5 recommendations for high-performance graphics. I'm not sure what the best way to emulate those in D3D9 is; you may need a CPU-side resource table and resource table slice concept that can easily handle all of bindless (D3D11.2, OpenGL 4.5), binding + arrays (D3D10/11, GL 3.x-4.x), and bindings w/o arrays (D3D9, GL 2.x), but I haven't put much time or thought into it.
@L. Spiro: How much work should I expect to create my own Math library?
What do you mean by stitching shaders? Do you mean that I could create a variety of shader files ("light.txt", "fog.txt", "blur.txt") and use C++ to gather the code together to create a single shader based on the scene's needs?
@Hodgman:
1. I'm trying to make things easier for the developer, with less code, so I created functions like EnableZBuffer(), DisableZBuffer(), etc.
3. The draw function should draw something based on vertex buffer only or vertex and index buffer, so what parameters should I use in IRenderer::Draw() to be able to use the same draw call for both D3D9 and D3D11?
4. In the engine there is a class like "CLight"
So the user can use CLight::AddLight(); to add a new light to the scene.
So I don't think I can let the user choose the shader to use, since every mesh MUST use the same lighting shader.
Another question: Can I use MRT and multisampling anti-aliasing (MSAA) at the same time in D3D11?
How much work should I expect to create my own Math library?
It depends on how fast you want it to be. If fast, a lot. You can also expect a lot of bugs in the rest of your engine caused by small problems in your math routines unless you know very well what you are doing.
do you mean that I could create variety of shader files "light.txt", "fog.txt", "blur.txt"
No, I mean Add.hlsl, Mul.hlsl, AshikhminShirley.hlsl, OrenNayar.hlsl, etc.
and use C++ to gather the code together to create a single shader code based on the scene needs?
Yes. Using very long shader keys.
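A rough sketch of stitching under those assumptions: snippets (which in practice would live in files like OrenNayar.hlsl) are concatenated into one source string according to the shader key, and the result is handed to the compiler. Everything here is illustrative:

```cpp
#include <map>
#include <string>
#include <vector>

// Snippet library; in practice these would be loaded from files
// such as OrenNayar.hlsl, Fog.hlsl, etc.
static const std::map<std::string, std::string> g_snippets = {
    { "OrenNayar", "float3 Diffuse(...) { /* Oren-Nayar */ }\n" },
    { "Fog",       "float3 ApplyFog(...) { /* fog */ }\n"       },
};

// Stitch the selected snippets (the "shader key") into one source
// string that would then be handed to the shader compiler.
std::string StitchShader(const std::vector<std::string> &key) {
    std::string source;
    for (const auto &name : key)
        source += g_snippets.at(name);
    return source;
}
```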
I'm trying to make things easier for the developer and with less code, so I created functions like EnableZBuffer() DisableZBuffer(), etc...
Be realistic. You are the only “developer”. Things will be easier for you if you model Direct3D 9 after Direct3D 11 and use state blocks, not individual states.
And, more realistically speaking, if you don’t use state blocks, you definitely will be the only person who ever uses whatever it is you are making. No one is going to use an engine from 3 generations ago.
The draw function should draw something based on vertex buffer only or vertex and index buffer, so what parameters should I use in IRenderer::Draw() to be able to use the same draw call for both D3D9 and D3D11?
Why would you pass vertex and index buffers instead of binding them the same way you do with everything else (textures, shaders, samplers, etc.)?
What if you need to use 2 vertex buffers?
Why don’t you look at the draw commands in Direct3D 9 and Direct3D 11 and see what is the same between them and pass that?
I am talking about the number of primitives, the starting offset, etc.
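One possible shape for that, sketched with hypothetical names: a parameter block carrying only what DrawIndexedPrimitive (D3D9) and DrawIndexed (D3D11) share, plus a helper for D3D9's primitive-count convention:

```cpp
#include <cstdint>

// Parameters common to both DrawIndexedPrimitive (D3D9) and
// DrawIndexed (D3D11); buffers and shaders are bound beforehand.
struct DrawParams {
    enum Topology { TriangleList, TriangleStrip, LineList } topology;
    uint32_t startIndex;    // first index to read
    int32_t  baseVertex;    // added to each index
    uint32_t indexCount;    // 0 for a non-indexed draw
    uint32_t vertexCount;   // used when indexCount == 0
};

// Helper a D3D9 backend needs: DrawIndexedPrimitive takes a
// primitive count, not an index count.
uint32_t PrimitiveCount(const DrawParams &p) {
    uint32_t count = p.indexCount ? p.indexCount : p.vertexCount;
    switch (p.topology) {
        case DrawParams::TriangleList:  return count / 3;
        case DrawParams::TriangleStrip: return count >= 2 ? count - 2 : 0;
        case DrawParams::LineList:      return count / 2;
    }
    return 0;
}
```

A D3D11 backend passes indexCount/startIndex/baseVertex straight through; a D3D9 backend converts with PrimitiveCount().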
So the user can use CLight::AddLight(); to add new light to the scene.
That’s impossible.
A scene knows about lights, not the other way around.
m_sScene->AddLight( CSharedLightPtr * _plLight );
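A minimal sketch of that ownership direction (all names are illustrative):

```cpp
#include <memory>
#include <vector>

struct CLight {
    float color[3] = { 1.0f, 1.0f, 1.0f };
};
using CSharedLightPtr = std::shared_ptr<CLight>;

// The scene knows about its lights, not the other way around; each
// frame it hands the active lights to the renderer/shaders.
class CScene {
    std::vector<CSharedLightPtr> m_lights;
public:
    void AddLight(const CSharedLightPtr &light) { m_lights.push_back(light); }
    const std::vector<CSharedLightPtr> &Lights() const { return m_lights; }
};
```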
So I don't think I can let the user choose the shader to use, since every mesh MUST use the same lighting shader.
Why do they have to use the same lighting shader? That doesn’t make sense, especially for a forward renderer.
Can I use MRT and multisampling anti-aliasing (MSAA) at the same time in D3D11?
Read the documentation.
If render targets use multisample anti-aliasing, all bound render targets and depth buffer must be the same form of multisample resource (that is, the sample counts must be the same).
This is different from Direct3D 9, where it is impossible.
L. Spiro
Things will be easier for you if you model Direct3D 9 after Direct3D 11 and use state blocks, not individual states.
What do you mean by state blocks? Do you mean structs such as D3D11_SAMPLER_DESC and D3D11_DEPTH_STENCIL_DESC?
Why would you pass vertex and index buffers instead of binding them the same way you do with everything else (textures, shaders, samplers, etc.)?
I'm binding vertex and index buffers inside Mesh class.
What if you need to use 2 vertex buffers?
I think the only situation where I will need 2 or more vertex buffers is when I have a model with multiple meshes (for example, a tank with body, gun, and wheels).
Correct me if I'm wrong.
Why don’t you look at the draw commands in Direct3D 9 and Direct3D 11 and see what is the same between them and pass that?
I already did, however a suggestion for the correct IRenderer::Draw() parameters that can support different versions of D3D would be appreciated.
That’s impossible.
A scene knows about lights, not the other way around.
m_sScene->AddLight( CSharedLightPtr * _plLight );
What I mean is that I use CLight to add lights to the lighting class; the scene then uses the CLight class to send the lights to the shader.
Why do they have to use the same lighting shader? That doesn’t make sense, especially for a forward renderer.
#4: Write shaders that permutate (I use macros, which is good for individuals, but stitching is used in major companies where the shaders will be larger than Jupiter).
Once again, this is a vastly over-arching question that should be asked in its own topic.
Any examples for using macros?
What do you mean by state blocks? do you mean structs such as: D3D11_SAMPLER_DESC and D3D11_DEPTH_STENCIL_DESC
As I said. Those are structures for creating state blocks. ID3D11RasterizerState represents the raster state block.
I think the only situation I will need to use 2 or more vertex buffers is when I have a model with multiple meshes (for example: tank will have body, gun, wheels)
Correct me if I'm wrong.
The purpose of multiple vertex streams running at once is not to draw multiple models at once. It is to separate attributes of a single draw call across multiple vertex buffers, such that buffer 0 has position and UV coordinates, while buffer 1 has normal, bitangent, and tangent (as one possible example).
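A sketch of that attribute split, with hypothetical vertex layouts (this is what an input layout with two input slots would describe; a depth-only pass could then bind stream 0 alone):

```cpp
#include <cstddef>

// One vertex of a single mesh, split across two streams: stream 0
// carries what every pass needs, stream 1 only what lighting needs.
struct Stream0Vertex {            // input slot 0
    float position[3];
    float uv[2];
};
struct Stream1Vertex {            // input slot 1
    float normal[3];
    float tangent[3];
    float bitangent[3];
};
```

Both buffers are bound for the same draw call; the vertex strides are the two struct sizes.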
Do you mean that I should have a shader for directional light and another shader for point light, etc.?
No. Every shader should run through every light that is set (be they directional lights, point lights, whatever).
Why can’t you draw a sidewalk with Oren-Nayar and a car with Ashikhmin-Shirley?
Why can’t the sidewalk use 1 shader and the car paint use another shader? It doesn’t make sense that they need to use the same shading routine, or even the same shader.
Any examples for using macros?
http://www.gamedev.net/topic/630483-best-way-to-organize-hlsl-code-in-dx11/#entry4974836
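For reference, a macro-based permutation in HLSL typically looks something like this (PS_INPUT, SampleNormalMap, Shade, and ApplyFog are hypothetical helpers; the permutation is chosen by which macros are defined when the shader is compiled, e.g. via the D3D_SHADER_MACRO list passed to D3DCompile):

```hlsl
// Compiled once per permutation; NORMAL_MAP and FOG are defined (or
// not) by the compile-time macro list, not at runtime.
float4 PixelMain( PS_INPUT _pIn ) : SV_Target {
#ifdef NORMAL_MAP
    float3 vNormal = SampleNormalMap( _pIn.vUv );
#else
    float3 vNormal = normalize( _pIn.vNormal );
#endif
    float4 vColor = Shade( vNormal, _pIn );
#ifdef FOG
    vColor = ApplyFog( vColor, _pIn.fFogFactor );
#endif
    return vColor;
}
```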
L. Spiro