Alessio1989

So, Direct3D 11.2 is coming :O

48 posts in this topic

What's New in Direct3D 11.2 http://channel9.msdn.com/Events/Build/2013/3-062

 

Any hopes? A wish list?

 

other related sessions:

EDIT:

 

slides and videos are up:

 

Edited by Alessio1989
My biggest wish would be getting this stuff downstream to Win7 at least - as much as I like using Win8 at home, the fact is Win7 is still a massive market, and I don't see the Win8.1 update changing that any time soon.

Given they seem to be providing hardware partially resident textures in this update, they'd better - otherwise it'll just be a useless bullet point for most developers :|

It appears that the only new GPU feature is tiled resources, which sounds like an interface for PRT without the shader side of things. The rest of it is just shader compiler stuff.

Yeah, Tiled Resources is indeed a PRT (or PRBuffer, I guess) interface. I've had a quick poke around the HLSL docs and there is a 'Load' function on the buffer object which returns a 'status' - I don't recall seeing that before (and it's marked as not final), so it could be that the docs just aren't up to date yet for things like Texture2D etc. in this regard, and they should have a Load(...) function too.

There should be no reason this requires anything newer than a current DX11 card; both vendors have the ability to express this under the hood, unlike the DX11.1 changes where, afaik, the NV devices couldn't do some of the required features (whereas the AMD 7970 series could).

The shader stuff is potentially interesting, but the PRT stuff is, imo, the bigger issue - but if it doesn't end up on Win7... *sigh*

Most of the changes in D3D11.1 were some useless old DX9 formats (with cap bits), a shader tracing API (they could have provided that with a simple remapping of their own proprietary tracing APIs), 3D stereo (no real need to change anything - AMD has HD3D, NVIDIA has 3D Vision Surround), and UAVs at all shader stages (OpenGL has a similar feature...). The rest is largely a big rename of interfaces, structs, and functions. The coolest and most exciting feature was in fact UAVs at all shader stages, and no one provides it on "old" DX11 cards (AMD still lacks full OpenGL 4.3 support too, though NVIDIA doesn't).

 

The funny thing is that half of AMD's WDDM 1.3 cards are just simple, "old", rebranded DX11 GPUs (mostly VLIW4, but there are VLIW5 cards too, as I read in the INI file of the latest leaked drivers)...

 

Of course, Windows 7 is dead along with Vista in MS's plans... and they killed PIX too...

 

Let's see what happens, but I'm not feeling confident...

 

edit: the funniest thing is that Microsoft still doesn't provide some of the "cool" APIs and tools that Chuck Walbourn posts around the web (not only DirectXTK and DirectXTex, but SSE3/SSE4/AVX/FMA extension support for DirectXMath, the SH math library, the updated BC6HBC7Encoder, etc.). The new Windows SDK lacks all of that.

Edited by Alessio1989

Direct3D 11.2? Only available on Windows 8.1/Windows Blue, no doubt.

 

Cool. Thanks Microsoft. I guess.


As far as a wish list goes... perhaps more for D3D12, but I'd like to see:
 
Input assembler moved completely into the vertex shader.  You bind resources of pretty much any type to the vertex shader and access them directly via texture look-ups.  It would make things a lot simpler and more flexible, IMHO.  Granted, you sort of can do this already, but it'd be nice if the GPUs/drivers were optimized for it.
 
Depth/stencil/blend stage moved completely into the pixel shader.  Sort of like UAVs, but not necessarily with the ability to do 'scatter' operations.  It could be exposed by allowing 'SV_Target0', 'SV_Target1', etc. to be readable and writable: initially it's loaded with the value of the target, and it can be read, compared, operated on, and then, if necessary, written.
 
Full support for double precision through the whole pipeline.  Including 64-bit formats.
 
Unify textures and buffers.  They are already interchangeable in many ways.  Call them textures, arrays, buffers, resources, blobs, whatever.  Make it a 4D structured block of data that can be used for input or output throughout the pipeline, with a few creation flags where necessary to improve performance.  And all resources/buffers/whatever are 4D.  Remove resource dimension limits (i.e. make them 32 or 64 bit unsigned ints); if there's memory available, I should be able to create it.
 
Sampler states removed and rolled into the shaders, replaced with a few HLSL intrinsics.  Again, this can be done already, but with HLSL intrinsics supporting it, it shouldn't incur any performance penalty.
 
Not that any of this'll be in there, but one can always hope ;)  Bottom line: I generally dislike state and fixed-function mess; rolling these things into shaders gives a lot of additional flexibility while making things simpler in general.

Edited by Ryan_001

I'm interested in the low-latency stuff. I heard about it some time ago and thought it was just a rumor. I wonder if it'll turn out to actually be useful - preferably soon.


Oh, this is interesting: I installed the preview of VS 2013, and in the new Windows Kits 8.1 I cannot find a new feature level.

 

Actually, the feature levels defined in d3dcommon.h are the same as D3D11.1's... so there is no D3D_FEATURE_LEVEL_11_2?

 

What will AMD and NVIDIA invent to sell new cards now? xD Maybe, maybe, maybe... this time we can just get some new features for free.

Edited by Alessio1989

What will AMD and NVIDIA invent to sell new cards now? xD Maybe, maybe, maybe... this time we can just get some new features for free.


Clearly you missed the part where AMD's GCN DX11 GPUs supported DX11.1 from the start with a driver update, so let's not go whining about things which aren't true, eh?

If something isn't physically supported in hardware, then having to get new hardware to support it makes sense...

 

What will AMD and NVIDIA invent to sell new cards now? xD Maybe, maybe, maybe... this time we can just get some new features for free.


Clearly you missed the part where AMD's GCN DX11 GPUs supported DX11.1 from the start with a driver update, so let's not go whining about things which aren't true, eh?

If something isn't physically supported in hardware, then having to get new hardware to support it makes sense...

 

 

Which features were "physically unsupported"?

 

Old DX9 / feature_level_9_x cap bits, formats, and minimum precision? Nope.

3D stereo? Nope.

UAVs at all shader stages? Nope - OpenGL (since version 4.2, if I'm correct) has a similar feature...

The rest is all about shader tracing for debugging purposes, an updated WARP software renderer, etc...

 

I asked AMD many times what kind of features were physically unsupported by "old DX11" cards; they never answered.

 

So please, tell me what VLIW5 and VLIW4 cards cannot support in feature_level_11_1, because no one knows.

On the other hand, NVIDIA says that UAVs at all shader stages is not a "gaming" feature, "so don't cry about the lack of it"...

 

The funniest thing is that of all the AMD WDDM 1.3 drivers leaked and released this week, only the last two series (HD 7xxx and HD 8xxx) are WDDM 1.3, and only half of those cards officially support feature_level_11_1, since a lot of them (all the HD 76xx, 75xx, 74xx, 85xx, 84xx, and integrated 86xx cards) are VLIW4 cards without feature_level_11_1 support.

 

I only see one reason: marketing.

Edited by Alessio1989
OK, so having watched the videos:

Tiled Resources are indeed PRT support, and it's there for Buffer and Texture2D(Array) objects - the docs just aren't up to date.

From what I can gather, the new features can be queried as part of existing feature levels, thus no D3D_FEATURE_LEVEL_11_2 - you just need the correct device interface to query for the new stuff where applicable. They also made a point that Tiled Resources are on hardware out there 'today'; they demoed something on an NV device, and drivers willing, I suspect it'll be on AMD's GCN-based GPUs (not older, as that hardware physically couldn't support it).

No one in the videos asked if this would end up on Win7, so I put on my 'bother an MS employee' hat and fired off an e-mail earlier - I'll post as/when I have info.
Yes, but the Tiled Resources stuff doesn't require D3D11.1; it should work with D3D11 devices IF they support the caps.

Not supporting the D3D11.1 feature level isn't the same as not supporting the D3D11.1 API - you can have a D3D10 device and still use the D3D11.1 API with it, just not any D3D10.1+ features.

If anyone is interested in specific NVIDIA support for WDDM 1.3 and the optional features such as tiling, I did some googling and testing today.

 

It appears the 700 series definitely supports WDDM 1.3 and at least texture tiling, as all three of the tiling demos shown at Build were run on them - http://blogs.nvidia.com/blog/2013/06/26/higher-fidelity-graphics-with-less-memory-at-microsoft-build/

 

I also personally tested texture tiling and hardware overlays on a 500 series card (560 Ti) over lunch, and at least with the current ForceWare 326.01 beta for the Win 8.1 preview, the 500 series is unsupported.  The driver presents it as a WDDM 1.2 device.  I left a post over at NVIDIA's developer forum seeking clarification - https://devtalk.nvidia.com/default/topic/548766/directx-and-direct-compute/wddm-1-3-win-8-1-support-besides-geforce-700-series/

 

If anyone else is interested in checking out the current state of feature support for specific devices, it's actually relatively quick to do (under an hour, excluding ISO download time).  Download the Windows 8.1 preview and the Visual Studio 2013 preview from microsoft.com, then install 8.1 onto a spare partition and VS2013 inside 8.1.  Grab the latest drivers for your card (NV via Windows Update, or AMD from their site - http://support.amd.com/us/kbarticles/Pages/AMDCatalystWIN8-1PreviewDriver.asp ).

 

There are feature specific sample apps available here - http://msdn.microsoft.com/en-us/library/windows/apps/bg182880.aspx#three

 

In my case I only quickly checked out the Tiled Resources and the Foreground Swap Chains (aka hardware overlays) samples.  Worth noting you can't just run them to verify support, since they both have fallback paths when hardware support isn't present - though you'll be sure to notice the tiled resources sample falling back to rendering on a WARP device!  I didn't note the line number, but you'll want to check the tiling sample with a breakpoint at "m_tiledResourcesTier = featureData.TiledResourcesTier;" in DeviceResources.cpp; m_tiledResourcesTier will reveal the support for resource tiling.  Similarly, for the foreground swap chain sample you'll want to break on "m_overlaySupportExists = dxgiOutput2->SupportsOverlays() ? true : false;" in DeviceResources.cpp.

 

I'm particularly curious about 600 series NVIDIA cards since I don't have ready access to one; my guess is that, right now anyway, NVIDIA's support for WDDM 1.3 and the optional features is only on Kepler-based GPUs.


Same thing with the AMD drivers: I have a Cayman GPU (VLIW4 arch.) and it's still WDDM 1.2. http://support.amd.com/us/kbarticles/Pages/AMDCatalystWIN8-1PreviewDriver.aspx

 

Note that not all HD 7xx0 and HD 8xx0 cards are based on the GCN arch; half of them are still VLIW4 cards (which means only feature_level_11_0 support).

In particular, all the "low-range" cards are VLIW4 GPUs: all the HD 76xx and below, dedicated and IGPs, and all the HD 85xx and below, plus the HD 86xx IGPs.

 

I should ask some of my friends if they have a VLIW4 card with WDDM 1.3 - I need some guinea pigs...

 

 

EDIT:

 

slides and videos are up:

 

Edited by Alessio1989
I had the presentations playing in the background while I worked this morning; What's New in Direct3D 11.2 and Massive Virtual Textures for Games: Direct3D Tiled Resources both have some decent-ish information on the new API features.  DirectX Graphics Debugging Tools is also good in places, with some solid info on the new command list annotations and other improvements to the usability of VS's graphics diagnostics tools (a lot of the rest of it retreads existing debugging features, though).
 
In particular, it's worth checking out slide 40 of What's New in Direct3D 11.2, where it summarises most (but not all) of the new features in 11.2 in terms of hardware support.  Essentially it boils down to:
 
Hardware Overlay = driver dependent for levels 9_1, 10_0, and 11_0.
Runtime Shader Linking = guaranteed at all levels >= 9_1.
Low Latency Presentation API = guaranteed at all levels >= 9_1.
Mappable Default Buffers  = driver dependent and then only for 11_0.
Tiled Resources = driver dependent and then only for 11_0.
 
I also booted back into the 8.1 Preview and had a closer look at the samples this evening.  Mappable default buffers currently aren't supported either, with ForceWare 326.01 on my 500 series GPU.  You can check it in the Tiled Resources sample I mentioned before, at the same breakpoint, since MapOnDefaultBuffers is part of the same featureData struct as the TiledResourcesTier value.
 
Incidentally, there appear to be two distinct tiers of tiled resource support for devices (check out d3d11.h in the 8.1 SDK headers, line 7378), so perhaps it isn't so grim for non-GCN, non-Kepler GPUs:
        D3D11_TILED_RESOURCES_NOT_SUPPORTED	= 0,
        D3D11_TILED_RESOURCES_TIER_1	= 1,
        D3D11_TILED_RESOURCES_TIER_2	= 2
Of course it could just be that TIER_2 corresponds to as-yet-unreleased hardware; I'm just speculating until we get better documentation or newer drivers.  That said, I'm hopeful some form of WDDM 1.3 and tiled resources comes to VLIW4 and Fermi-based GPUs, since that would make it more practical to invest the time in learning the new APIs.  Looking back on past beta drivers for Microsoft OS previews, it's possible: I remember WDDM 1.2 drivers were initially only for the 400 series (spring '12), but by the time Windows 8 arrived there were WDDM 1.2 drivers for the 200 series as well.
 
I guess we'll just have to wait and see.  I personally don't know or understand enough about the hardware details; hopefully the vendors become more specific than "supported by 90 million shipped GPUs" soon.
Edited by backstep

On my GTX 670, WDDM 1.3 and tiled resources seem to be supported. However, I can't get the Tiled Resources sample (http://code.msdn.microsoft.com/Direct3D-Tiled-Resources-80ee7a6e) to work.

I already posted the following in the Q&A section of the sample page:

 

 

 

I have an NVIDIA GTX 670 and use the latest 326.01 driver. DirectX says that Tier 1 resource tiling is supported.
When I execute the sample, the first error occurs at TerrainRenderer.cpp:102:

// Create a wrapping trilinear sampler with max-filter behavior.
samplerDesc.Filter = D3D11_FILTER_MAXIMUM_MIN_MAG_MIP_LINEAR;
DX::ThrowIfFailed(device->CreateSamplerState(&samplerDesc, &m_maxFilterSampler));

If I comment out the line samplerDesc.Filter = D3D11_FILTER_MAXIMUM_MIN_MAG_MIP_LINEAR; then the error doesn't occur.

The second problem occurs at ResidencyManager.cpp:700:

device->GetResourceTiling(
    texture,
    &resource->totalTiles,
    &resource->packedMipDesc,
    &resource->tileShape,
    &subresourceTilings,
    0,
    resource->subresourceTilings.data());

All entries in the resource->subresourceTilings array have WidthInTiles = HeightInTiles = DepthInTiles = 0. Therefore subsequent calls that allocate textures fail because of a size of 0 (WidthInTiles * HeightInTiles == 0).

Can anybody help me?

 

 

Has anybody fixed this, or is it a driver problem?

Edited by xdopamine

Sample code needed to be submitted a few weeks before BUILD to run through our content reviewers, etc.  My team didn't have a fully functional Tier 1 driver at that time, so the sample only runs against Tier 2.  There will be an update posted in the next few days that supports Tier 1 as well.  If you can't wait, the WARP software rasterizer (Tier 2) runs the Mars demo quite well on a good desktop/laptop processor.

 

Max McMullen

Direct3D Development Lead

Microsoft

 

Thank you for the info. Let's hope that most DX11.0 cards will be Tier 1 capable :o



Unify textures and buffers. They are already interchangeable in many ways. Call them textures, arrays, buffers, resources, blobs, whatever. Make it a 4D structured block of data that can be used for input or output throughout the pipeline, with a few creation flags where necessary to improve performance. And all resources/buffers/whatever are 4D. Remove resource dimension limits (i.e. make them 32 or 64 bit unsigned ints); if there's memory available, I should be able to create it.
This sounds like a good thing at first, but it ignores the hardware's texture filtering capabilities if you treat everything the same.  Texture data is accessed differently than buffer data, so there are logically different objects to represent them.

 

Even so, you could approximate some of this functionality with byte address buffers already, couldn't you?


The sentiment here seems fairly negative about the 11.2 features, but I was actually rather happy with the content, considering that this is only a point release.  The hardware overlay/compositing API seems like a pretty cool thing (although it shouldn't be optional...) and offers several levels of usage for different scenarios.  Shader linking seems like a good answer in concept to the über-shader problem, although I would have to try it out for a final judgement.  Mappable default buffers seem like a great addition to reduce memory usage/bandwidth and the number of API calls needed.

 

Low-latency presentation is also a welcome update for applications that need the user's input reflected quickly in the rendered output.  And of course, tiled resources provide some pretty good functionality directly in the API.  That's free functionality that is non-trivial to develop yourself, which is pretty good in my book.

 

On top of those, there are behind-the-scenes updates to the runtime for performance improvements.  Why the negative vibe on this release?  It seems pretty solid to me.

