So, Direct3D 11.2 is coming :O

47 comments, last by Adam Miles 10 years, 9 months ago

What's New in Direct3D 11.2: http://channel9.msdn.com/Events/Build/2013/3-062

Any hopes, wish-list items?

other related sessions:

EDIT:

slides and videos are up:

"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/
My biggest wish would be getting this stuff downstream to Win7 at least - as much as I like using Win8 at home, the fact is Win7 is still a massive market, and I don't see the Win8.1 update changing that any time soon.

Given they seem to be providing hardware partially-resident textures in this update, they'd better - otherwise it'll just be a useless bullet point for most developers :|

Nothing exciting here... http://msdn.microsoft.com/en-us/library/dn312084(v=vs.85).aspx and DXGI 1.3: http://msdn.microsoft.com/en-us/library/dn280344(v=vs.85).aspx

If AMD or NVIDIA ask us to buy a new GPU for this, they can suck my balls..

"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

It appears that the only new GPU feature is tiled resources, which sounds like an interface for PRT without the shader side of things. The rest of it is just shader compiler stuff.

Yeah, the Tiled Resource is indeed a PRT (or PRBuffer, I guess) interface. I've had a quick poke around the HLSL docs: there is a 'load' function on the buffer object which returns a 'status', which I don't recall seeing before (and it's marked as not final). So it could be that the docs just aren't up to date yet for things like Texture2D etc. in this regard, and they should have a load(...) function too.
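For what it's worth, here's a rough sketch of how that status-returning access looks, assuming the shipped HLSL matches the preliminary tiled-resource docs (the CheckAccessFullyMapped intrinsic and the trailing status argument are the new bits; resource names here are made up):

```hlsl
// Sampling a tiled texture and checking whether the touched tiles were
// actually resident. Per the preliminary docs, Sample/Load gain an
// optional trailing 'out uint status' argument for tiled resources.
Texture2D<float4> gTiledTex : register(t0);
SamplerState      gSampler  : register(s0);

float4 SampleResident(float2 uv)
{
    uint status;
    float4 color = gTiledTex.Sample(gSampler, uv, int2(0, 0), 0.0f, status);

    // If any tile the sample touched wasn't mapped, fall back to something
    // safe (a solid color here; a known-resident lower mip in practice).
    if (!CheckAccessFullyMapped(status))
        color = float4(1, 0, 1, 1);

    return color;
}
```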

There should be no reason this requires anything newer than a current DX11 card; both vendors have the ability to express this under the hood, unlike the DX11.1 changes where, AFAIK, the NV devices couldn't do some of the required features (whereas the AMD 7970 series could).

The shader stuff is potentially interesting, but the PRT stuff is, IMO, the bigger one - and if it doesn't end up on Win7... *sigh*

Most of the changes in D3D11.1 were some useless old DX9 formats (with cap bits), a shader tracing API (they could have provided that by simply remapping the vendors' own proprietary tracing APIs), 3D stereo (no changes were needed: AMD has HD3D, NVIDIA has 3D Vision Surround), and UAVs at all shader stages (OpenGL has a similar feature...). The rest is largely a big rename of interfaces, structs and functions. The coolest and most exciting feature was in fact UAVs at all shader stages, and no one provides it on the "old" DX11 cards (AMD still lacks full OpenGL 4.3 support too, unlike NVIDIA).

The funny thing is that half of AMD's WDDM 1.3 cards are just simple, "old", rebranded DX11 GPUs (mostly VLIW4, but there are VLIW5 cards too, as I read in the INI file of the last leaked drivers)...

Of course, Windows 7 is dead along with Vista in Microsoft's plans... and they killed PIX too...

Let's see what happens, but I don't feel confident..

EDIT: the funniest thing is that Microsoft still doesn't ship some of the "cool" APIs and tools that Chuck Walbourn posts around the web (not only DXTK and DXTex, but also SSE3/SSE4/AVX/FMA extension support for DXMath, the SHM library, an updated BC6HBC7Encoder, etc.)... the new Windows SDK lacks all of that..

"Recursion is the first step towards madness." - "Skegg?ld, Skálm?ld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

Direct3D 11.2? Only available on Windows 8.1, a.k.a. "Windows Blue", no doubt.

Cool. Thanks Microsoft. I guess.

As far as a wish list goes... perhaps this is more for D3D12, but I'd like to see:

Input assembler moved completely into the vertex shader. You bind resources of pretty much any type to the vertex shader and access them directly via texture look-ups. It would make things a lot simpler and more flexible, IMHO. Granted, you sort of can do this already, but it'd be nice if the GPUs/drivers were optimized for it.

Depth/stencil/blend stage moved completely into the pixel shader. Sort of like UAVs, but not necessarily with the ability to do 'scatter' operations. Could be exposed by allowing 'SV_Target0', 'SV_Target1', etc. to be both readable and writable: initially it's loaded with the value of the target, and it can be read, compared, operated on, and then, if necessary, written.

Full support for double precision through the whole pipeline, including 64-bit formats.

Unify textures and buffers. They are already interchangeable in many ways. Call them textures, arrays, buffers, resources, blobs, whatever: make it a 4D structured block of data that can be used for input or output throughout the pipeline, with a few creation flags where necessary to improve performance. And make all resources/buffers/whatever 4D. Remove resource dimension limits (i.e. make them 32- or 64-bit unsigned ints); if there's memory available, I should be able to create it.

Sampler states removed and rolled into the shaders. Replace them with a few HLSL intrinsics. Again, this can partly be done already (see the sketch after this list), but with HLSL intrinsics supporting it, it shouldn't incur any performance penalty.

Not that any of this will actually be in there, but one can always hope ;) Bottom line: I generally dislike state and fixed-function mess; rolling these things into shaders gives a lot of additional flexibility while making things simpler in general.
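On the sampler point, the sampler-free path that exists today is Texture2D.Load: integer texel coordinates, a mip index, and no SamplerState anywhere. A minimal sketch of doing the filtering yourself on top of it (names are made up):

```hlsl
// Manual bilinear filtering built on sampler-less Load. Note what you lose:
// Load does no addressing (out-of-range reads return zero), so wrap/clamp
// would also have to be done by hand - which is why HLSL intrinsics for
// filtering/addressing would be the nicer version of this wish.
Texture2D<float4> gTex : register(t0);

float4 BilinearLoad(float2 uv, float2 texSize)
{
    float2 pos = uv * texSize - 0.5f;
    int2   p0  = (int2)floor(pos);
    float2 f   = pos - floor(pos);

    float4 c00 = gTex.Load(int3(p0 + int2(0, 0), 0));
    float4 c10 = gTex.Load(int3(p0 + int2(1, 0), 0));
    float4 c01 = gTex.Load(int3(p0 + int2(0, 1), 0));
    float4 c11 = gTex.Load(int3(p0 + int2(1, 1), 0));

    return lerp(lerp(c00, c10, f.x), lerp(c01, c11, f.x), f.y);
}
```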

Input assembler moved completely into the vertex shader. You bind resources of pretty much any type to the vertex shader and access them directly via texture look-ups. It would make things a lot simpler and more flexible, IMHO. Granted, you sort of can do this already, but it'd be nice if the GPUs/drivers were optimized for it.

GPUs already work this way. The driver generates a small bit of shader code that runs before the vertex shader (AMD calls it a fetch shader), and all it does is load data out of the vertex buffer and dump it into registers. If you did it all yourself in the vertex shader, there's no real reason for it to be any slower.
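To make that concrete, here's roughly what writing the fetch yourself looks like: no input layout, just SV_VertexID indexing a buffer bound to the vertex shader (struct and buffer names are made up):

```hlsl
// Hand-written "fetch shader": vertex data lives in a StructuredBuffer SRV
// instead of going through the input assembler.
struct Vertex
{
    float3 position;
    float2 uv;
};

StructuredBuffer<Vertex> gVertices : register(t0);

cbuffer PerObject : register(b0)
{
    float4x4 gWorldViewProj;
};

struct VSOut
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

VSOut VSMain(uint vertexId : SV_VertexID)
{
    Vertex v = gVertices[vertexId];  // the fetch, done manually
    VSOut o;
    o.pos = mul(float4(v.position, 1.0f), gWorldViewProj);
    o.uv  = v.uv;
    return o;
}
```

On the API side you'd set a NULL input layout, bind the buffer as an SRV to the VS stage, and issue a plain Draw(vertexCount, 0).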

Depth/stencil/blend stage moved completely into the pixel shader. Sort of like UAVs, but not necessarily with the ability to do 'scatter' operations. Could be exposed by allowing 'SV_Target0', 'SV_Target1', etc. to be both readable and writable: initially it's loaded with the value of the target, and it can be read, compared, operated on, and then, if necessary, written.

Programmable blending isn't happening without completely changing the way desktop GPUs handle pixel shader writes. TBDRs can do it since they work with an on-chip cache, but they can't really do arbitrary numbers of render targets.
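The closest approximation today is a pixel-shader UAV read-modify-write, and it runs straight into exactly that write-ordering problem. A sketch, assuming a single R32_UINT color buffer bound as a UAV in place of the render target (D3D11 only allows typed UAV loads from 32-bit formats, hence the manual pack/unpack):

```hlsl
// Manual SrcAlpha/InvSrcAlpha blending via a UAV instead of a render target.
// There is NO ordering guarantee between overlapping pixels, which is the
// hazard that keeps this from being a real replacement for the blend unit.
RWTexture2D<uint> gColor : register(u0); // R32_UINT, bound in place of the RTV

float4 UnpackRGBA8(uint p)
{
    uint4 b = uint4(p, p >> 8, p >> 16, p >> 24) & 0xFF;
    return (float4)b / 255.0f;
}

uint PackRGBA8(float4 c)
{
    uint4 b = (uint4)round(saturate(c) * 255.0f);
    return b.x | (b.y << 8) | (b.z << 16) | (b.w << 24);
}

void PSMain(float4 svPos : SV_Position, float4 src : COLOR0)
{
    uint2 p      = (uint2)svPos.xy;
    float4 dst   = UnpackRGBA8(gColor[p]); // read the current "target" value
    float4 blend = src * src.a + dst * (1.0f - src.a);
    gColor[p]    = PackRGBA8(blend);
}
```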

Doing depth/stencil in the pixel shader deprives you of a major optimization opportunity; it would be like always writing to SV_Depth.
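For anyone who hasn't hit this: the moment a pixel shader declares an SV_Depth output, the hardware can no longer early-reject pixels, because the final depth isn't known until the shader has run. Illustration:

```hlsl
// Declaring SV_Depth disables early depth/stencil rejection: every covered
// pixel must execute the full shader before it can be depth-tested.
float4 PSMain(float4 svPos : SV_Position,
              out float depth : SV_Depth) : SV_Target
{
    depth = saturate(svPos.z + 0.001f); // any shader-computed depth value
    return float4(1, 1, 1, 1);
}
```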

I'm interested in the low-latency stuff. I heard about it some time ago and thought it was just a rumor. I wonder if it'll turn out to actually be useful - preferably soon.

Previously "Krohm"

This topic is closed to new replies.
