DX11 DirectX 11 is out of the bag in a few weeks


So what are everyone's thoughts on that? Sounds interesting. They are going to talk about it at the XNA summit in a few weeks and Nvidia is going to talk about it in August. Supposedly Nvidia is skipping DX10.1 and going to DX11...

What's the point of DirectX 11? Games hardly even use DX10.

Get to hardware raytracing already!

Quote:
Original post by Viperrr
What's the point of DirectX 11? Games hardly even use DX10.

Get to hardware raytracing already!


DX11 might be Raytracing.

Any Major Changes?

What will break in recompiling DX10 code for DX11?

Which graphics cards won't support it?

Quote:
Original post by Eeyore
Quote:
Original post by Viperrr
What's the point of DirectX 11? Games hardly even use DX10.

Get to hardware raytracing already!


DX11 might be Raytracing.


110% sure that it won't be...but that's just me.

Quote:
Original post by vs322
Quote:
Original post by Eeyore
Quote:
Original post by Viperrr
What's the point of DirectX 11? Games hardly even use DX10.

Get to hardware raytracing already!


DX11 might be Raytracing.


110% sure that it won't be...but that's just me.


Agreed, raytracing? No way.

Quote:
Original post by Eeyore
Quote:
Original post by Viperrr
What's the point of DirectX 11? Games hardly even use DX10.

Get to hardware raytracing already!


DX11 might be Raytracing.


Sounds like someone's been buying into the Intel PR. [wink]

Rest assured, nobody's making Nvidia and ATI abandon years of research and technology dedicated to rasterization.

Quote:
Original post by MJP
Rest assured, nobody's making Nvidia and ATI abandon years of research and technology dedicated to rasterization.


Perhaps Promit might [wink]

So the proverbial cat is out of its proverbial bag [grin]

I'm still waiting on the final details, but a few of the DirectX MVPs who've been involved with some of the early D3D11 work will likely be at the GameFest London event on August 6th. May well see you there!

Not sure if any MVPs will be at the USA and Japan events though.

Naturally, no one who knows anything is allowed to say anything publicly but I suppose it wouldn't really be breaking any rules to say it'll be worth paying attention to...


Cheers,
Jack

Yeah, I find it strange how there are relatively few games around that are DX10 compatible. It kinda makes me tempted to support DX9 as well as 10 in the project I'm working on. Are Microsoft adopting a new strategy of having more frequent revisions of the API? DX9 was around for a comparatively long time, whereas DX10 & 10.1 feel like a stopgap.

Ray tracing? Not specifically, but I expect more features making GPGPU easier, which means ray tracing if you want it.

I can see a push towards adaptive tessellation / displacement at the vertex level.

Quote:
more features making GPGPU easier
"Compute Shader" maybe?
Quote:
I can see a push towards adaptive tessellation / displacement at the vertex level
"new programmable and fixed function stages designed to enable powerful, flexible tessellation" maybe?

More in my journal [wink]
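
Purely speculating on my own account here, nothing from under NDA: if the new tessellation stages slot into the existing pipeline the same way the geometry shader did in D3D10, binding them might look roughly like the sketch below. Every identifier in it is a guess, not a real SDK name.

#include <d3d11.h>

// Guesswork sketch of binding the rumoured tessellation stages, assuming
// they follow the existing *SSetShader pattern. None of these names are
// taken from a shipping SDK.
void BindTessellatedPatch(ID3D11DeviceContext* ctx,
                          ID3D11VertexShader* vs,
                          ID3D11HullShader*   hs,   // programmable: per-patch setup
                          ID3D11DomainShader* ds,   // programmable: evaluate the surface
                          ID3D11PixelShader*  ps)
{
    // Feed the pipeline control points instead of plain triangles; the
    // fixed-function tessellator would sit between the two new stages.
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);

    ctx->VSSetShader(vs, NULL, 0);
    ctx->HSSetShader(hs, NULL, 0);
    ctx->DSSetShader(ds, NULL, 0);
    ctx->PSSetShader(ps, NULL, 0);
}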

Quote:
Are Microsoft adopting a new strategy of having more frequent revisions of the API? DX9 was around for a comparatively long time, whereas DX10 & 10.1 feel like a stopgap.
Yes, they made this assertion a few years ago!

It's all part of the fixed-capabilities and granular feature levels approach. D3D9 was around for about 5 years before 10 hit the floor, but counting D3D9Ex the API will effectively have another 3-5 years at least... And really, D3D9 has been 5 iterations: fixed function (aka D3D7 compatibility), Shader 1.x (aka D3D8 compatibility), Shader 2.x, Shader 3.x and then D3D9Ex (WDDM behaviour). I'm sure everyone here is familiar with how much of a mess that ended up being [smile]

ISTR it was at PDC'05 where SamG answered an audience question by describing more regular (e.g. annual) API changes. Not necessarily 10, 11, 12, 13 but at least 10.1, 10.2, 10.3, 11, 11.1 etc... This being a much saner and more manageable way of handling the rapid progress of computer graphics than the old-style caps system.
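
To give a flavour of what granular feature levels could mean in practice, here's a purely hypothetical sketch of device creation; the function name, enum values and signature below are my guesses modelled on how D3D10.1's feature levels already work, not anything from a real SDK. The idea is one entry point that walks a list of levels and hands back the best one the hardware supports, instead of dozens of independent caps bits.

#include <d3d11.h>

// Hypothetical sketch only: names, enum values and the signature are
// guesses at what a caps-free, feature-level based device creation could
// look like.
bool CreateBestDevice(ID3D11Device** outDevice,
                      ID3D11DeviceContext** outContext,
                      D3D_FEATURE_LEVEL* outLevel)
{
    // Ask for the best level first, then fall back towards D3D9-class cards.
    const D3D_FEATURE_LEVEL wanted[] =
    {
        D3D_FEATURE_LEVEL_11_0,   // the full (guessed) D3D11 feature set
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,    // "old hardware through the new API" idea
    };

    // One call returns the highest level the hardware supports, so the
    // application branches on a single value rather than on caps bits.
    HRESULT hr = D3D11CreateDevice(
        NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
        wanted, sizeof(wanted) / sizeof(wanted[0]),
        D3D11_SDK_VERSION,
        outDevice, outLevel, outContext);

    return SUCCEEDED(hr);
}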

Quote:
Yeah, I find it strange how there are relatively few games around that are DX10 compatible
It's not really that surprising to me. A lot of companies will have a mature and well-understood technology platform built on D3D9-era technology. They'll have incrementally improved it over the years, which is one advantage of the old-style caps system.

Ripping all that out and really taking advantage of a new API like D3D10/10.1/11 is a huge amount of work. I'd suspect that most studios going forward will be adopting these into any new technology they develop or buy in.

Given product lifecycles of several years, you should start to see more of them appearing around now, and a lot more over the next year or so, as projects started when Vista and D3D10 were released come around to completion and RTM.


hth
Jack

Quote:
Original post by jollyjeffers
Yes, they made this assertion a few years ago!


I feel so out of touch, Mr. Hoxley.

Quote:
Original post by Dave
Yeah, I find it strange how there are relatively few games around that are DX10 compatible. It kinda makes me tempted to support DX9 as well as 10 in the project I'm working on. Are Microsoft adopting a new strategy of having more frequent revisions of the API? DX9 was around for a comparatively long time, whereas DX10 & 10.1 feel like a stopgap.


I have to agree. It feels like they're getting a bit ahead of themselves, since it's not like DX10 was such a smashing success that everyone adopted it instantly (let alone made it run decently or worked with the IHVs to get actually working drivers). I'm not in the know, but I got the idea the current approach is still to do D3D9 and emulate that in D3D10, tacking on an extra feature here or there. Demirug also commented on that a while back (here), saying that the problem with D3D10 is that everyone is using it as an emulation of D3D9 concepts. Seems like something to tackle before claiming acceptance and moving on.


Oh and:

Quote:
"new programmable and fixed function stages designed to enable powerful, flexible tessellation"


Fixed function?

Quote:
Original post by Dave
Quote:
Original post by jollyjeffers
Yes, they made this assertion a few years ago!


I feel so out of touch Mr. Hoxley.
[lol] Just me geeking out and watching all the video casts of the developer conferences [cool]


Not to go off on too much of a tangent, but the whole mentality around Windows Vista can't have done the adoption of D3D10 much good. Taken in isolation, I doubt you'll find any seasoned D3D9 developer who doesn't like the new API, either from a coding point of view or in terms of features and capability.


Quote:
I got the idea the current approach is still to do D3D9 and emulate that in D3D10, tacking on an extra feature here or there
Yup, would imagine that is quite likely.

Software technology in general has this strange bipolar nature of moving very fast yet moving extremely slowly. Companies release new libraries and tools all the time, but customers and developers 'like what they know and know what they like'.

Also, it's rare that a developer will get a clean slate to work with - you'll almost always have some sort of legacy code/content/process/design to integrate with, which often limits your ability to truly utilize all the new bells and whistles of a shiny new technology.

Moving from early shader/fixed-function hybrid solutions to 'pure' shader-driven architectures with D3D9+SM3, to a simple D3D9-in-D3D10, to a 'proper' D3D10 solution seems like a realistic progression. Even when the technology is all migrated over to D3D10, it's possible that some of the old content creation tools, test suites, media etc. will still be in old formats that may hinder the engine's ability to feed shiny new algorithms and show off cool "new" features.


It'll therefore be interesting to see what shape the D3D11 API takes - we've seen that 10.0 and 10.1 are very similar, but how will migrating or multi-targeting work with D3D11?


Quote:
Oh and:
Quote:
"new programmable and fixed function stages designed to enable powerful, flexible tessellation"

Fixed function?
ID3D11Device->MakeMyGraphicsSilkySmooth(TRUE); ?

Jack

Being Vista-only meant DX10 was never going to be a smashing success. IMO slow adoption is due to the number of XP-only customers out there rather than any difficulty porting to DX10.

What I'm interested to know about compute shaders is how well they fit with using the GPU for traditional rendering; I hope compute shaders and their resources are fully pipelineable with conventional rendering (and resources fully interchangeable). I see no reason why this wouldn't be the case, fingers crossed.
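
Just to illustrate the kind of interchangeability I'm hoping for, here's a rough sketch assuming a D3D10-style API extended with compute; every CS* call and type name below is a guess at what such an API might expose, not anything announced.

#include <d3d11.h>

// Rough sketch: update a buffer in a compute pass, then read the very same
// buffer in an ordinary draw call, with no CPU round-trip in between.
void SimulateThenDraw(ID3D11DeviceContext* ctx,
                      ID3D11ComputeShader* simCS,
                      ID3D11UnorderedAccessView* particleUAV, // write view of the buffer
                      ID3D11ShaderResourceView*  particleSRV, // read view of the same buffer
                      UINT particleCount)
{
    // Compute pass: update the particles on the GPU, 64 per thread group.
    ctx->CSSetShader(simCS, NULL, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &particleUAV, NULL);
    ctx->Dispatch((particleCount + 63) / 64, 1, 1);

    // A resource presumably can't be bound for writing and reading at once,
    // so clear the UAV slot before the rendering pass uses it.
    ID3D11UnorderedAccessView* nullUAV = NULL;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, NULL);

    // Rendering pass: the vertex shader fetches the same buffer through an
    // SRV and the pipeline carries on as usual.
    ctx->VSSetShaderResources(0, 1, &particleSRV);
    ctx->Draw(particleCount, 0);
}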

Personally, at the moment I'm not a big fan of moving computation from the CPU to the GPU, not until GPUs have more fillrate than artists can possibly spend (one day, maybe, but not yet). The HD revolution is with us and fillrate is still at a premium. Sure, GPGPU is very interesting and embracing it is the future, so yes, I'm glad to see more support.

I don't see multiple CPUs being a replacement for the GPU, raytracing or not. I think raytracing will merely complement rasterization technology. Raytracing on the CPU or the GPU? That will be interesting to watch; both are capable, but I think GPUs will always be more naturally suited.

Quote:
Original post by jollyjeffers

ID3D11Device->MakeMyGraphicsSilkySmooth(TRUE); ?


Mind your NDA [wink]

Quote:
It'll therefore be interesting to see what shape the D3D11 API takes - we've seen that 10.0 and 10.1 are very similar, but how will migrating or multi-targetting work with D3D11??


That's the million-dollar question. If D3D11 is too dissimilar from 10, they'll have created three (3!) platforms for developers to support on Windows. I'm not qualified to comment on the quality of the new APIs, and I'm sure the new features are worth it, but it doesn't strike me as a good strategy to push out that many (perhaps significantly) distinct APIs to develop against on the Windows platform alone. Not to make a drama of it, but it could very well hurt PC gaming.

I completely agree with the "like what they know" observation and things will of course improve. But D3D10 has been around for what, 2 years now (~1 year if you count HW support and install base)? If they keep at it like this, at some point even the most progressive developer might conclude it's probably easier and certainly more profitable to port to consoles than to keep up with all the D3D versions.

If it is true, then DX is yet another step ahead of GL it seems.

The reason that most games do not utilize the full potential of DX10 is that developers like to target their games at XP, Vista, PlayStation, Wii and Xbox. Only one of those platforms has DX10. So I wouldn't expect that to change anytime soon.

Quote:
Original post by pismakron
If it is true, then DX is yet another step ahead of GL it seems.


Depends what you mean by 'ahead'. All things considered, the IHVs must now be looking at supporting 3 versions of DX and 3 versions of GL. Only 1 of the GL versions actually exists, and only 1 of the DX versions has a significant installed user base (I can't see that changing dramatically for another 2 years at least).

Essentially no matter what you do, there are a huge number of people with DX10 capable cards who can only use DX9-era features. Investing time and money in cutting-edge PC graphics is a worse investment at the moment than it has been for a long time.

To me, this talk of DX11 confuses the issue even more. I can't see it damaging PC gaming especially, but it may damage sales of top-end video cards for a couple of years. In the meantime, I'm cultivating a healthy disinterest in the bleeding edge of PC graphics.

I have been coding both OpenGL and DirectX for a few years now, and I can only say: programmable tessellation, compute shaders + polymorphic shaders + dynamic linkage...
Hmm, does that sound like something the OpenGL ARB board members would ever give us? Nah.
Thumbs up for DirectX!

Quote:
Original post by ruysch
I have been coding both OpenGL and DirectX for a few years now, and I can only say: programmable tessellation, compute shaders + polymorphic shaders + dynamic linkage...
Hmm, does that sound like something the OpenGL ARB board members would ever give us? Nah.
Thumbs up for DirectX!


/signed

DirectX gives us a standard; I really dislike vendor-specific extensions. (Although hardware peeps have found ways to do this in DirectX too; R2VB, anyone?)

The problem is that 3 versions of DirectX on the go at once seems too many; wait for people to migrate from 9 to 10, otherwise the market will get very confused.

Quote:
Original post by Martin
I think raytracing will merely complement rasterization technology.


Yeah, that's where the smart money is. Despite what some might have you believe, ray tracing isn't the end-all, be-all holy grail of computer graphics. It's quite simply just a different way of getting pixels on the screen, and one that has different performance characteristics from rasterization. For some things, like reflections, shooting off a couple of rays makes sense. For casting primary rays... you can't beat straight-up rasterization.
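
To make the hybrid idea concrete, here's a toy sketch, plain C++ with no particular API assumed, of the single reflection ray you'd spawn per reflective pixel once the usual rasterization pass has laid down positions and normals. The Vec3 helpers are just for illustration.

// Toy illustration of the "couple of rays for reflections" idea: rasterize
// as usual, then for each reflective pixel build one secondary ray from the
// surface and hand it to whatever ray/scene intersection is available.
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)    { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3  scale(Vec3 a, float s) { Vec3 r = { a.x * s, a.y * s, a.z * s }; return r; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

// r = d - 2(d.n)n : reflect the (normalised) view direction about the
// surface normal to get the secondary ray's direction.
static Vec3 ReflectionRayDir(Vec3 viewDir, Vec3 normal)
{
    return sub(viewDir, scale(normal, 2.0f * dot(viewDir, normal)));
}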
