Medo Mex

Questions about D3D9 and D3D11


I'm now working on an interface to support both D3D9 and D3D11, and I have several questions:
 
1. How do I change the values of D3D11_RASTERIZER_DESC during rendering?
 
Let's say I have to switch between cull modes. Usually I create two D3D11_RASTERIZER_DESC structures (RasterizerCullModeNone and RasterizerCullModeBack).
 
Now, the problem is that sometimes I want to set RasterizerCullModeBack while also enabling wireframe mode, and other times I want to use RasterizerCullModeBack without wireframe mode.
 
So what is the correct way to handle that, so I can freely change the values of D3D11_RASTERIZER_DESC during rendering?
 
2. How come CheckMultisampleQualityLevels() is a method of the device class? Usually I need the values from this function before I create the device.
 
Do I have to create the device twice in order to use device->CheckMultisampleQualityLevels()?
 
3. I'm using an interface to support several versions of D3D. The interface "IRenderer" should handle everything related to creating buffers and drawing. Now I'm trying to create the draw function: what parameters should IRenderer::Draw() take so that the draw call executes on either IRenderer9 or IRenderer11 (based on the selected D3D version)?
 
4. What is the most efficient way to handle things like per-pixel lighting, fog, and bump mapping in the engine?
 
Should I create one single shader file and use it for all the effects that the engine supports?
 
If yes, what if a developer working with the engine wants to add effects the engine doesn't support using their own shader? Should I render the mesh twice in order to use both shaders (the custom one created by the developer and the engine's default shader)?
 
5. Is there a performance difference between D3DXMath and XNA Math? What are the major differences? Which is commonly used in the latest FPS games?
 


#1: D3D11_RASTERIZER_DESC is just a descriptor for creating an ID3D11RasterizerState.  Change the values as you please and create new ID3D11RasterizerState objects, one for each combination of settings you need.

 

#2: You don’t need to know multi-sample levels until the device has been created.  If you think you do, you are wrong.  Whatever problem you are having here needs to be fixed on a higher level.  D3D11CreateDeviceAndSwapChain() is just a helper function—you should always be using D3D11CreateDevice() and IDXGIFactory::CreateSwapChain().

 

#3: This depends highly on your overall architecture and is a vastly scoped question.

 

#4: Write shaders that permute (I use macros, which is good for individuals, but stitching is used in major companies where the shaders will be larger than Jupiter).

Once again, this is a vastly over-arching question that should be asked in its own topic.

 

#5: I don’t know.  I always use my own math libraries.

If this is XNA Math, I would expect D3DXMath to be much faster.

What are the differences?  XNA Math appears to be just a math library built for correctness rather than performance.  It also seems to be lacking a 4×4 matrix class.  On the other hand, it is portable, whereas D3DXMath is not.

What do real companies use?  Their own math libraries.  D3DXMath is not portable.

 

 

L. Spiro

Edited by L. Spiro

With #1, you're trying to emulate D3D9's philosophy on D3D11. It's much easier to do the opposite and emulate D3D11's philosophy on D3D9. Allow the user to create rasterizer states - on D3D11 this is a simple wrapper around the API, and on D3D9 you can make your own struct that contains values for all the equivalent render states.

#2 - on top of what LS pointed out above, you don't HAVE to use a multisampled swapchain; you can create a separate multisampled rendertarget, which you resolve onto the swapchain's rendertarget.
This is useful when the 3d scene should be multisampled, but the HUD/etc doesn't need it.

#3 - what does the draw function do?

#4 - my personal preference is to have no in-built shaders, and allow the user to provide and choose the shader for each draw call.

I would highly recommend that you go a step further than D3D11 and create whole pipeline states (a combination of the different state objects in D3D11) and make them immutable. Your rendering code would then create a pipeline state for each particular permutation of rendering options it needs (which sounds huge, but in practice is not all that many).

That's not necessarily the best way to solve the problem, but it'll put you in good shape to adapt to D3D12 and GLNext when they're publicly available. I'm not sure that there's a good way to really emulate their APIs on the current ones, but keeping things "chunkier" (fewer individual objects and states) is a good start.

That applies to the pipeline states, shader stages, etc. The hardware is far less modular than the APIs make it seem and the newer APIs are more directly reflecting that. This is the same story with D3D9 -> D3D10, as the changes in hardware to chunkier states had already started, which is why D3D10/11 have a handful of state objects rather than a bunch of individual state flags and values like D3D9 did.

Resource binding is also likely going to be quite different. Try to build out a bindless model, use texture arrays as much as possible, and generally just follow the high-end D3D11 and OpenGL 4.5 recommendations for high-performance graphics. I'm not sure what the best way to emulate those in D3D9 is; you may need a CPU-side resource table and resource table slice concept that can easily handle all of bindless (D3D11.2, OpenGL 4.5), binding + arrays (D3D10/11, GL 3.x-4.x), and bindings w/o arrays (D3D9, GL 2.x), but I haven't put much time or thought into it.


@L. Spiro: How much work should I expect to create my own Math library?

 

What do you mean by stitching shaders? Do you mean that I could create a variety of shader files ("light.txt", "fog.txt", "blur.txt") and use C++ to gather the code together into a single shader based on the scene's needs?

 

@Hodgman:

 

1. I'm trying to make things easier for the developer, with less code, so I created functions like EnableZBuffer(), DisableZBuffer(), etc...

 

3. The draw function should draw something based on a vertex buffer only, or on a vertex and index buffer, so what parameters should I use in IRenderer::Draw() to be able to use the same draw call for both D3D9 and D3D11?

 

4. In the engine there is a class like "CLight"

 

So the user can use CLight::AddLight() to add a new light to the scene.

 

So I don't think I can let the user choose the shader to use, since every mesh MUST use the same lighting shader.

 

Another question: can I use MRT and multisample anti-aliasing (MSAA) at the same time in D3D11?

Edited by Medo3337


How much work should I expect to create my own Math library?

It depends on how fast you want it to be. If fast, a lot. You can also expect a lot of bugs in the rest of your engine caused by small problems in your math routines unless you know very well what you are doing.
 

do you mean that I could create variety of shader files "light.txt", "fog.txt", "blur.txt"

No, I mean Add.hlsl, Mul.hlsl, AshikhminShirley.hlsl, OrenNayar.hlsl, etc.
 

and use C++ to gather the code together to create a single shader code based on the scene needs?

Yes. Using very long shader keys.
 

I'm trying to make things easier for the developer and with less code, so I created functions like EnableZBuffer() DisableZBuffer(), etc...

Be realistic. You are the only “developer”. Things will be easier for you if you model Direct3D 9 after Direct3D 11 and use state blocks, not individual states.
And, more realistically speaking, if you don’t use state blocks, you definitely will be the only person who ever uses whatever it is you are making.  No one is going to use an engine from 3 generations ago.
 

The draw function should draw something based on vertex buffer only or vertex and index buffer, so what parameters should I use in IRenderer::Draw() to be able to use the same draw call for both D3D9 and D3D11?

Why would you pass vertex and index buffers instead of binding them the same way you do with everything else (textures, shaders, samplers, etc.)?
What if you need to use 2 vertex buffers?
Why don’t you look at the draw commands in Direct3D 9 and Direct3D 11 and see what is the same between them and pass that?
I am talking about the number of primitives, the starting offset, etc.
 

So the user can use CLight::AddLight(); to add new light to the scene.

That’s impossible.
A scene knows about lights, not the other way around.
m_sScene->AddLight( CSharedLightPtr * _plLight );
 

So I don't think I can let the user choose the shader to use, since every mesh MUST use the same lighting shader.

Why do they have to use the same lighting shader? That doesn’t make sense, especially for a forward renderer.
 

Can I use MRT and multisampling anti-aliasing (MSAA) at the same time in D3D11?

Read the documentation.

If render targets use multisample anti-aliasing, all bound render targets and depth buffer must be the same form of multisample resource (that is, the sample counts must be the same).

 
This is different from Direct3D 9, in which it is impossible.


L. Spiro

Things will be easier for you if you model Direct3D 9 after Direct3D 11 and use state blocks, not individual states.

 

 

What do you mean by state blocks? Do you mean structs such as D3D11_SAMPLER_DESC and D3D11_DEPTH_STENCIL_DESC?

 


Why would you pass vertex and index buffers instead of binding them the same way you do with everything else (textures, shaders, samplers, etc.)?

 

I'm binding vertex and index buffers inside Mesh class.

 


What if you need to use 2 vertex buffers?

I think the only situation where I will need 2 or more vertex buffers is when I have a model with multiple meshes (for example: a tank will have a body, gun, and wheels).

 

Correct me if I'm wrong. 

 

Why don’t you look at the draw commands in Direct3D 9 and Direct3D 11 and see what is the same between them and pass that?

 

 

I already did; however, a suggestion for the correct IRenderer::Draw() parameters that can support different versions of D3D would be appreciated.

 


That’s impossible.
A scene knows about lights, not the other way around.
m_sScene->AddLight( CSharedLightPtr * _plLight );
 

What I mean is that I use CLight to add lights to the lighting class, and then the scene uses the CLight class to send the lights to the shader.

 


Why do they have to use the same lighting shader? That doesn’t make sense, especially for a forward renderer.
 
Do you mean that I should have a shader for directional light and another shader for point light, etc.?
 


#4: Write shaders that permutate (I use macros, which is good for individuals, but stitching is used in major companies where the shaders will be larger than Jupiter).
Once again, this is a vastly over-arching question that should be asked in its own topic.

 

Any examples for using macros?

Edited by Medo3337


What do you mean by state blocks? do you mean structs such as: D3D11_SAMPLER_DESC and D3D11_DEPTH_STENCIL_DESC

As I said, those are structures for creating state blocks.  ID3D11RasterizerState represents the rasterizer state block.

I think the only situation I will need to use 2 or more vertex buffers is when I have a model with multiple meshes (for example: tank will have body, gun, wheels)
 
Correct me if I'm wrong.

The purpose of multiple vertex streams running at once is not to draw multiple models at once. It is to separate attributes of a single draw call across multiple vertex buffers, such that buffer 0 has position and UV coordinates, while buffer 1 has normal, bitangent, and tangent (as one possible example).

Do you mean that I should have a shader for directional light and anothor shader for point light, etc...?

No. Every shader should run through every light that is set (be they directional lights, point lights, whatever).
Why can’t you draw a sidewalk with Oren-Nayar and a car with Ashikhmin-Shirley?
Why can’t the sidewalk use 1 shader and the car paint use another shader? It doesn’t make sense that they need to use the same shading routine, or even the same shader.


Any examples for using macros?

http://www.gamedev.net/topic/630483-best-way-to-organize-hlsl-code-in-dx11/#entry4974836


L. Spiro

