
DX11 Supporting Multiple DirectX Versions


Hello,

 

I have an interface, IRenderer, and two classes derived from it: IDX9Renderer and IDX11Renderer.

 

Now I'm trying to make the engine support either DX9 or DX11.

 

So my question is: what should the IRenderer interface look like?

 

For example:

IRenderer::createVertexBuffer(...);
IRenderer::draw(...);

What parameters should the above methods take so that they work for both IDX9Renderer and IDX11Renderer?

Edited by Medo3337

If you want to follow this path, you need to declare functions in the base renderer class that apply to both the DX9 and DX11 derived classes. Everything DX9/DX11-specific (members and functions) goes into the derived classes.
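Something along these lines, just to illustrate the idea (the type and parameter names here are invented for the example, not taken from a real engine):

// API-agnostic handle; each backend decides what it wraps internally.
class IVertexBuffer
{
public:
    virtual ~IVertexBuffer() {}
};

class IRenderer
{
public:
    virtual ~IRenderer() {}

    // Parameters use only engine-level types, never D3D9/D3D11 types.
    virtual IVertexBuffer* createVertexBuffer(const void* vertexData,
                                              unsigned vertexCount,
                                              unsigned vertexStride) = 0;
    virtual void draw(IVertexBuffer* buffer, unsigned vertexCount) = 0;
};

IDX9Renderer and IDX11Renderer then implement these methods using IDirect3DVertexBuffer9 / ID3D11Buffer internally.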

Just out of curiosity, why go through this at all? Off the top of my head, all GPUs from the last three years or so support DX11. Plus, Windows XP isn't supported anymore (and, I believe, not often used among gamers). Of course, if it's just for practice then that's another matter.

I created an engine which I've been working on for years using DX9, but it's set up to be quite modular and flexible so that later on I can "update" it to DX11.


@cozzie: The reason is that I want to be able to add any future version of DirectX to the engine.

 

I'm still not sure what the IRenderer rendering interface should look like.

class IRenderer
{
    // generic renderer stuff
};

class IDirectX9Renderer : public IRenderer
{
    // specific DX9 stuff
};

If this is not what you're looking for, then maybe you can be a bit more specific.


@cozzie: That's what I already have; I'm just not sure what parameters to put here so I can support multiple versions of DX:

IRenderer::createVertexBuffer(...);
IRenderer::createIndexBuffer(...);
IRenderer::draw(...);


 

@cozzie: That's what I already have; I'm just not sure what parameters to put here so I can support multiple versions of DX:

IRenderer::createVertexBuffer(...);
IRenderer::createIndexBuffer(...);
IRenderer::draw(...);

I would say that you should start with a low-level abstraction modelled on the API you want your interface to emulate.

 

Right now I have a "ResourceCreator" class that you can get after you load the library (DLL). This, in turn, is used to create any type of resource that is supported.
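In case it helps, the loading side of that pattern can be as simple as this (a rough sketch: the exported function name "CreateResourceCreator" and the IResourceCreator type are made up for the example, while LoadLibraryA/GetProcAddress are the actual Win32 calls):

#include <windows.h>

class IResourceCreator;  // the engine-side interface
typedef IResourceCreator* (*CreateResourceCreatorFn)();

IResourceCreator* LoadRenderer(const char* dllName)
{
    // e.g. "RendererD3D9.dll" or "RendererD3D11.dll"
    HMODULE dll = LoadLibraryA(dllName);
    if (!dll)
        return NULL;

    CreateResourceCreatorFn create = (CreateResourceCreatorFn)
        GetProcAddress(dll, "CreateResourceCreator");

    return create ? create() : NULL;
}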

 

An example of my index buffer's Create method (I still need to make allowance for 2-byte indices):

/*! This creates the index buffer with the specified number of elements and in the specified memory pool.
*
*	Parameters:
*		Elements	: The number of elements, each of size sizeof( pIndices[ 0 ] ).
*		pIndices	: A pointer to the index data to use for creation.
*		Topology	: How the elements will be used to create the rendering surface.
*		Usage		: The usage pattern to apply to this resource.
*		CPUAccess	: What access the CPU should have to this resource.
*/
DLLEXPORT virtual HRESULT Create( const unsigned Elements, const unsigned* pIndices,
								  const PrimitiveTopology Topology = PrimitiveTopology::TriangleList,
								  const UsageType Usage = UsageType::Immutable,
								  const CPUAccess CPUAccess = CPUAccess::None ) = NOT_IMPLEMENTED;


And the vertex buffer:

/*! This creates the vertex buffer with the specified number of elements.
*
*	Parameters:
*		Elements	: The number of elements, each of size Description.length().
*		Description	: The layout of each element in memory.
*		pVertices	: A pointer to the vertex data to use for creation.
*		Usage		: The usage pattern to apply to this resource.
*		CPUAccess	: What access the CPU should have to this resource.
*/
DLLEXPORT virtual HRESULT Create( const unsigned Elements, const Vertex::Layout Description,
								  const void* pVertices, UsageType Usage = UsageType::Immutable,
								  const CPUAccess CPUAccess = CPUAccess::None ) = NOT_IMPLEMENTED;
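Each backend then translates those API-agnostic enums into its own flags. A rough sketch of what the D3D11 side of such a mapping could look like (UsageType::Dynamic is assumed here for illustration; only Immutable appears above):

// Hypothetical mapping from the engine's UsageType to D3D11 usage flags.
D3D11_USAGE ToD3D11Usage( const UsageType Usage )
{
    switch ( Usage )
    {
    case UsageType::Immutable: return D3D11_USAGE_IMMUTABLE;
    case UsageType::Dynamic:   return D3D11_USAGE_DYNAMIC;
    default:                   return D3D11_USAGE_DEFAULT;
    }
}

The D3D9 implementation would do the same kind of translation to D3DUSAGE_* / D3DPOOL_* values.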

HTH

Ryan.


What is the purpose of DX9 support? If it is for old hardware rather than Windows XP, you can use Direct3D feature levels to target SM 2.x and SM 3.0 GPUs (keep in mind that you will be unable to use SM 3.0 features): https://msdn.microsoft.com/en-us/library/ff476876.aspx
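For reference, requesting down-level feature levels at device creation looks roughly like this (standard D3D11CreateDevice usage; error handling omitted):

const D3D_FEATURE_LEVEL levels[] =
{
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_3,   // SM 2.x-class hardware
    D3D_FEATURE_LEVEL_9_1,
};

ID3D11Device*        device  = NULL;
ID3D11DeviceContext* context = NULL;
D3D_FEATURE_LEVEL    chosen;

// The runtime picks the highest level in the array that the GPU supports.
HRESULT hr = D3D11CreateDevice( NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                levels, ARRAYSIZE( levels ), D3D11_SDK_VERSION,
                                &device, &chosen, &context );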

 

Otherwise, as suggested, you should write different rendering paths for D3D9 and D3D11 (writing your own graphics API abstraction could be really challenging, since DX9 and DX11 are really different in features).

Edited by Alessio1989



Just out of curiosity, why go through this at all? Off the top of my head, all GPUs from the last three years or so support DX11. Plus, Windows XP isn't supported anymore (and, I believe, not often used among gamers). Of course, if it's just for practice then that's another matter.

 


What is the purpose of DX9 support?

 

As of December 2014, Windows XP still holds 18.26% of the market share according to this site:

 

http://www.netmarketshare.com/operating-system-market-share.aspx?qprid=10&qpcustomd=0

 

I don't know how many of those 18.26% are gamers, but based on those numbers it could still be viable to support XP to reach an increased userbase.

Edited by Juliean


I don't know how many of those 18.26% are gamers

 

I'm not sure I'd trust those figures at all for gaming metrics, as they are based on web browser user-agent strings, which are not relevant to what we are doing.

 

On the other hand, Steam records much better statistics on operating system usage: http://store.steampowered.com/hwsurvey

 

Edit: this shows about 4% XP users, 32-bit and 64-bit combined...

Edited by braindigitalis


Right now there is a problem:

 

vs_4_0 is working in DX11 but NOT working in DX9

vs_3_0 is NOT working in DX11 but working in DX9

 

"NOT working" means I get an E_FAIL HRESULT when calling Device::CreateVertexBuffer().

 

Why?


Right now there is a problem:

 

vs_4_0 is working in DX11 but NOT working in DX9

vs_3_0 is NOT working in DX11 but working in DX9

 

"NOT working" means I get an E_FAIL HRESULT when calling Device::CreateVertexBuffer().

 

Why?

Those are vertex shader versions; I do not see how they would affect the creation of a vertex buffer.

 

Can you show some code for how you are creating the resources?

 

If you have further issues I can post the relevant (sections of) code that I use.

 

HTH

Ryan.


@ryan20fun: The following doesn't work; if I change the LPCSTR pProfile parameter from vs_3_0 to vs_4_0, it works:

 ID3D10Blob *VS, *PS;
 D3DX11CompileFromFile("shader.fx", 0, 0, "VS", "vs_3_0", 0, 0, 0, &VS, 0, 0);
 D3DX11CompileFromFile("shader.fx", 0, 0, "PS", "ps_3_0", 0, 0, 0, &PS, 0, 0);

 device->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pVS);
 device->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pPS);


 

@ryan20fun: The following doesn't work; if I change the LPCSTR pProfile parameter from vs_3_0 to vs_4_0, it works:

 ID3D10Blob *VS, *PS;
 D3DX11CompileFromFile("shader.fx", 0, 0, "VS", "vs_3_0", 0, 0, 0, &VS, 0, 0);
 D3DX11CompileFromFile("shader.fx", 0, 0, "PS", "ps_3_0", 0, 0, 0, &PS, 0, 0);

 device->CreateVertexShader(VS->GetBufferPointer(), VS->GetBufferSize(), NULL, &pVS);
 device->CreatePixelShader(PS->GetBufferPointer(), PS->GetBufferSize(), NULL, &pPS);

Hmm, I think it has to do with the feature level (I have never tried using vs_3_0 before).

What happens if you use a down-level feature level like D3D_FEATURE_LEVEL_9_3?

 

Would it hurt to just use vs_4_0 when using D3D10+?
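Also, capture the error blob from the compile call so the compiler tells you exactly why it failed. Something like this (a rough, untested sketch using the same call as in your snippet):

ID3D10Blob *shaderBlob = NULL, *errorBlob = NULL;

// The second-to-last argument receives the compiler's error messages.
HRESULT hr = D3DX11CompileFromFile( "shader.fx", 0, 0, "VS", "vs_3_0",
                                    0, 0, 0, &shaderBlob, &errorBlob, 0 );
if ( FAILED( hr ) && errorBlob )
{
    OutputDebugStringA( (const char*)errorBlob->GetBufferPointer() );
    errorBlob->Release();
}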

 

Unfortunately, I have only used the fixed-function pipeline with D3D9, so I cannot give you any pointers on making an interface that would require minimal changes to use D3D9.

 

HTH

Ryan.


I guess D3D11 doesn't support vs_3_0 / ps_3_0.

 

There is something I'm not sure I'm doing correctly.

 

For vertex buffers, D3D9 uses IDirect3DVertexBuffer9 while D3D11 uses ID3D11Buffer.

 

What is the correct way to create a struct that handles both so I can use it in IRenderer?

 

Here is what I have right now:

struct EngineVertexBuffer
{
       IDirect3DVertexBuffer9 *D3D9_VB;
       ID3D11Buffer *D3D11_VB;
};


Hmm, I think it has to do with the feature level (I have never tried using vs_3_0 before).

What happens if you use a down-level feature level like D3D_FEATURE_LEVEL_9_3?

Would it hurt to just use vs_4_0 when using D3D10+?

D3D_FEATURE_LEVEL_9_3 does NOT support shader model 3, only SM 2.x.
 
Here are the correct shader compiler targets for Direct3D 10Level9 feature levels (this applies to DX10.1 and DX11.x, and I would guess to the upcoming DX12 as well): https://msdn.microsoft.com/en-us/library/jj215820.aspx#direct3d_9.1__9.2__and_9.3_feature_levels
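Concretely, using the same D3DX11 call as earlier in the thread, the down-level profiles look like this (a sketch; error handling omitted):

ID3D10Blob *VS = NULL, *PS = NULL;

// vs_4_0_level_9_3 / ps_4_0_level_9_3 target D3D_FEATURE_LEVEL_9_3;
// feature levels 9.1 and 9.2 use the *_level_9_1 profiles instead.
D3DX11CompileFromFile("shader.fx", 0, 0, "VS", "vs_4_0_level_9_3", 0, 0, 0, &VS, 0, 0);
D3DX11CompileFromFile("shader.fx", 0, 0, "PS", "ps_4_0_level_9_3", 0, 0, 0, &PS, 0, 0);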

Edited by Alessio1989


 

For vertex buffers, D3D9 uses IDirect3DVertexBuffer9 while D3D11 uses ID3D11Buffer.

 

What is the correct way to create a struct that handles both so I can use it in IRenderer?

 

Here is what I have right now:

struct EngineVertexBuffer
{
      IDirect3DVertexBuffer9 *D3D9_VB;
      ID3D11Buffer *D3D11_VB;
};

 

I would use two different implementations. But if you want one implementation to hold both versions for some reason, then I would use a union and some kind of tag variable denoting which one to use.

 

So it would be:

enum class ApiVersion { D3D9, D3D11 };

class VertexBuffer : public GraphicsResource
{
  public:
     void*           GetObject() const;
     ApiVersion      GetVersion() const;

  private:
     union
     {
          ID3D11Buffer*             VB11;
          IDirect3DVertexBuffer9*   VB9;
     } u;
     ApiVersion       m_DirectXVersion;
};
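The renderer then branches on that tag before touching the raw pointer. Roughly (assuming the ApiVersion enum above, plus a device context and vertex stride available from elsewhere):

// Hypothetical use inside the D3D11 renderer's draw path:
if ( vb.GetVersion() == ApiVersion::D3D11 )
{
    ID3D11Buffer* buffer = static_cast<ID3D11Buffer*>( vb.GetObject() );
    UINT stride = vertexStride, offset = 0;
    context->IASetVertexBuffers( 0, 1, &buffer, &stride, &offset );
}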

HTH

Ryan.

