SeraphLance

DX11 on a DX10-class GPU -- How far can you go?

Recommended Posts

I'd like to start a project in DirectX 11 to learn the API, but currently my GPU (GTS 250) only supports DirectX 10.  I know that 11 has some level of backwards compatibility, and I plan to get a new graphics card within the next month or two.  I'd like to get something working in the meantime with DirectX 10 features and just add the 11 stuff when I pick up a new GPU, but is this really practicable, or are the DX10(11) and DX11 code paths largely independent?


DX11 is a superset of DX10, so yes, there should be no problems :)

If you choose to switch to DX11 later on, all you need to do is request a higher feature level when you create the device; all your old "DX10 feature level" code will run without any need to change it.
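For example, here's a minimal sketch of what that device creation looks like with a feature-level fallback list (untested; the variable names are just illustrative):

#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL chosen;

// Levels are tried in order; a GTS 250 will land on one of the 10_x levels.
const D3D_FEATURE_LEVEL levels[] = {
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
};

HRESULT hr = D3D11CreateDevice(
    nullptr,                       // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                       // no software rasterizer DLL
    0,                             // creation flags
    levels, ARRAYSIZE(levels),
    D3D11_SDK_VERSION,
    &device, &chosen, &context);
// 'chosen' reports the feature level the hardware actually gave you.

Once you're on 11-class hardware, the same call simply starts returning D3D_FEATURE_LEVEL_11_0 and the extra features become available.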


You won't be able to use tessellation; as far as I know, that's the only thing D3D11 can actually do that D3D10 can't. I don't know if it works for tessellation, but when you create the device you can set it up as D3D_DRIVER_TYPE_SOFTWARE, which should emulate the hardware in software. Everything is then done on the CPU, so it's really slow, but you should be able to use everything DirectX has to offer even if your GPU doesn't support it.

D3D_DRIVER_TYPE_SOFTWARE is for a 3rd-party software implementation (the DLL is passed in the _In_ HMODULE Software parameter when you create the device).

D3D_DRIVER_TYPE_REFERENCE and D3D_DRIVER_TYPE_WARP provide a software implementation without the need for 3rd-party software. The first is the slowest but implements all D3D features (with some limitations, like low filtering quality); the second is SIMD-accelerated but has more feature limitations. http://msdn.microsoft.com/en-us/library/ff476328.aspx
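If you want to try that, it's the same D3D11CreateDevice call as in the snippet above, just with a different driver type (again, only a sketch reusing the earlier variables):

HRESULT hr = D3D11CreateDevice(
    nullptr,
    D3D_DRIVER_TYPE_WARP,          // or D3D_DRIVER_TYPE_REFERENCE
    nullptr, 0,
    levels, ARRAYSIZE(levels),
    D3D11_SDK_VERSION,
    &device, &chosen, &context);

WARP is usually fast enough to develop against; the reference rasterizer is really only useful for correctness checks.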


I mentioned using a software driver type because I wasn't sure whether you'd need a 3rd-party driver to use a reference driver, or if DirectX comes with one. I've never used either one; I just know they exist. You should be able to use a software driver type, which doesn't require a reference driver.

 

EDIT: Never mind, you're right; I should have just said to use a reference driver in the first place.


Other things you won't be able to use with feature level 10:

  1. 'Real' compute shaders (D3D11 supports a very limited version of compute shaders for 10-level devices, but it's mostly useless -- see the sketch after this list)
  2. Read-only depth buffers
  3. Expanded MSAA support (reading from MSAA depth buffers, per-sample pixel shaders, SV_Coverage as input, interpolate to sample position)
  4. Cubemap arrays
  5. Append/Consume buffers
  6. UAVs from the pixel shader
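On item 1, here's a minimal sketch of how you could detect that limited compute shader path at runtime (assuming a 'device' created as in the snippets above):

D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
device->CheckFeatureSupport(
    D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
    &opts, sizeof(opts));
if (opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x)
{
    // cs_4_x compute shaders are available, but with heavy restrictions
    // (e.g. only a single UAV, and only raw/structured buffer views).
}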


  • Similar Content

    • By AxeGuywithanAxe
      I wanted to get some advice on what everyone thinks of this debugger; I've been getting some strange results from testing my code and I wanted to see if anyone else has had any issues.
      For instance, I added three "ClearRenderTargetView" calls and three "Draw full screen quad" calls, and my reported fps became a fifth of what it usually was. Thank you.
    • By schneckerstein
      Hello,
      I managed so far to implement NVIDIA's NDF-Filtering at a basic level (the paper can be found here). Here is my code so far:
      //...
      // project the half vector onto the normal (?)
      float3 hppWS = halfVector / dot(halfVector, geometricNormal);
      float2 hpp = float2(dot(hppWS, wTangent), dot(hppWS, wBitangent));
      // compute the pixel footprint
      float2x2 dhduv = float2x2(ddx(hpp), ddy(hpp));
      // compute the rectangular area of the pixel footprint
      float2 rectFp = min((abs(dhduv[0]) + abs(dhduv[1])) * 0.5, 0.3);
      // map the area to ggx roughness
      float2 covMx = rectFp * rectFp * 2;
      roughness = sqrt(roughness * roughness + covMx);
      //...
      Now I want to combine this with LEAN mapping as stated in Chapter 5.5 of the NDF paper.
      But I struggle to understand what these sections actually mean in code:
      I suppose the first-order moments are the B coefficient of the LEAN map; however, things like
      float3 hppWS = halfVector / dot(halfVector, float3(lean_B, 0));
      don't bring up anything useful.
      Next there's:
      This simply means:
      // M and B are the coefficients from the LEAN map
      float2x2 sigma_mat = float2x2(
          M.x - B.x * B.x, M.z - B.x * B.y,
          M.z - B.x * B.y, M.y - B.y * B.y);
      does it?
      Finally:
      This is the part that confuses me the most: how am I supposed to convolve two matrices? I know the concept of convolution in terms of functions, not matrices. Should I multiply them? That didn't produce any useful output.
      I hope someone can help with this maybe too-specific question; I'm really desperate to make this work and I've spent too many hours on trial & error...
      Cheers,
      Julian
    • By Baemz
      Hello,
      I've been working on some culling-techniques for a project. We've built our own engine so pretty much everything is built from scratch. I've set up a frustum with the following code, assuming that the FOV is 90 degrees.
      float angle = CU::ToRadians(45.f);
      Plane<float> nearPlane(Vector3<float>(0, 0, aNear), Vector3<float>(0, 0, -1));
      Plane<float> farPlane(Vector3<float>(0, 0, aFar), Vector3<float>(0, 0, 1));
      Plane<float> right(Vector3<float>(0, 0, 0), Vector3<float>(angle, 0, -angle));
      Plane<float> left(Vector3<float>(0, 0, 0), Vector3<float>(-angle, 0, -angle));
      Plane<float> up(Vector3<float>(0, 0, 0), Vector3<float>(0, angle, -angle));
      Plane<float> down(Vector3<float>(0, 0, 0), Vector3<float>(0, -angle, -angle));
      myVolume.AddPlane(nearPlane);
      myVolume.AddPlane(farPlane);
      myVolume.AddPlane(right);
      myVolume.AddPlane(left);
      myVolume.AddPlane(up);
      myVolume.AddPlane(down);
      When checking the intersections I am using a BoundingSphere of my models, which is calculated by taking the average position of all vertices and then choosing the furthest distance to a vertex as the radius. The actual intersection test looks like this, where "myFrustum90" is the frustum described above.
      The orientationInverse is the viewMatrix in this case.
      bool CFrustum::Intersects(const SFrustumCollider& aCollider)
      {
          CU::Vector4<float> position = CU::Vector4<float>(aCollider.myCenter.x, aCollider.myCenter.y, aCollider.myCenter.z, 1.f) * myOrientationInverse;
          return myFrustum90.Inside({ position.x, position.y, position.z }, aCollider.myRadius);
      }
      The Inside() function looks like this:
      template <typename T>
      bool PlaneVolume<T>::Inside(Vector3<T> aPosition, T aRadius) const
      {
          for (unsigned short i = 0; i < myPlaneList.size(); ++i)
          {
              if (myPlaneList[i].ClassifySpherePlane(aPosition, aRadius) > 0)
              {
                  return false;
              }
          }
          return true;
      }
      And this is the ClassifySpherePlane() function. (The plane is defined as a Vector4 called myABCD, where ABC is the normal.)
      template <typename T>
      inline int Plane<T>::ClassifySpherePlane(Vector3<T> aSpherePosition, float aSphereRadius) const
      {
          float distance = (aSpherePosition.Dot(myNormal)) - myABCD.w;
          // completely on the front side
          if (distance >= aSphereRadius)
          {
              return 1;
          }
          // completely on the back side (aka "inside")
          if (distance <= -aSphereRadius)
          {
              return -1;
          }
          // sphere intersects the plane
          return 0;
      }
      Please bear in mind that this code is not optimized nor well written by any means. I am just looking to get it working.
      The result of this culling is that the models seem to be culled a bit "too early", so that the culling is visible and the models pop away.
      How do I get the culling to work properly?
      I have tried different techniques but haven't gotten any of them to work.
      If you need more code or explanations feel free to ask for it.

      Thanks.
       
    • By evelyn4you
      hi,
      I have read a lot about binding a constant buffer to a shader, but something is still unclear to me.
      e.g. when performing: vertexshader.setConstantbuffer( buffer, slot )
      is the buffer bound
      a. to the vertex shader stage, or
      b. to the VertexShader that is currently set as the active vertex shader?
      Is it possible to bind a constant buffer to a vertex shader, e.g. VS_A, and keep this binding even after the active vertex shader has changed?
      I mean I want to bind constantbuffer_A to VS_A and Constantbuffer_B to VS_B, and then only use updateSubresource without calling setConstantBuffer every time.

      Look at this example:
      SetVertexShader ( VS_A )
      updateSubresource(buffer_A)
      vertexshader.setConstantbuffer ( buffer_A,  slot_A )
      perform drawcall       ( buffer_A is used )

      SetVertexShader ( VS_B )
      updateSubresource(buffer_B)
      vertexshader.setConstantbuffer ( buffer_B,  slot_A )
      perform drawcall   ( buffer_B is used )
      SetVertexShader ( VS_A )
      perform drawcall   (now which buffer is used ??? )
       
      I ask this question because I have made a custom render engine and want to minimize updateSubresource and setConstantBuffer calls.
    • By noodleBowl
      I got a quick question about buffers when it comes to DirectX 11. If I bind a buffer using a command like:
      IASetVertexBuffers
      IASetIndexBuffer
      VSSetConstantBuffers
      PSSetConstantBuffers
      and then later on I update that bound buffer's data using commands like Map/Unmap or any of the other update commands.
      Do I need to rebind the buffer again in order for my update to take effect? If I don't rebind, is that really bad, as in do I take a performance hit? My thought process is that if the buffer is already bound, why would I need to rebind it? I'm using that same buffer; it just has different data.
       