
Meltac

Member Since 29 Mar 2012
Offline Last Active Mar 24 2014 02:08 AM
-----

Posts I've Made

In Topic: Direct X 11 really worth it?

20 January 2014 - 02:40 AM

Thanks guys for pointing out the "big" features of DX11 / D3D11.

Re DX11 killer features: UAVs, resource views, compute.

Compute shaders are the big one. Combined with the ability for arbitrary read access from buffers and textures, it opens the door to all kinds of new techniques. We haven't even really scratched the surface of what's possible with compute. Other notable mentions:

Access to MSAA data
Access to depth buffers (possible in D3D9 through driver hacks)
Better instancing support (you can access arbitrary buffers now in vertex shaders, which makes instancing much more powerful)
Constant buffers (reduces CPU overhead)
Up to 128 textures
Decoupled textures and sampler states
Integer math in shaders
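
To illustrate the first point above, a minimal D3D11 compute-shader sketch with a UAV might look like this; the buffer names and the "double each value" operation are purely illustrative, not from any particular engine:

```hlsl
// Minimal D3D11 compute shader: reads from an SRV, writes to a UAV.
// gInput/gOutput and the doubling operation are made up for illustration.
StructuredBuffer<float>   gInput  : register(t0); // arbitrary read access
RWStructuredBuffer<float> gOutput : register(u0); // writable UAV - D3D11 only

[numthreads(64, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    gOutput[id.x] = 2.0f * gInput[id.x];
}
```

Neither the unordered write nor the dispatch model has any D3D9 equivalent, which is why this needs an engine that creates the required views and issues `Dispatch()` calls.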


It's a shame that none of those things can be used solely from within an HLSL shader - you need an engine built around DirectX 11 to make use of these features. This is what keeps many shader devs like me "stuck" on DX9, as gasto correctly puts it:

That dilemma is what keeps developers (like me) from actually creating the engine. By the time I finished writing the engine in DirectX 11, DirectX 12 would be the cutting edge.

In Topic: Direct X 11 really worth it?

17 January 2014 - 03:50 AM


http://msdn.microsoft.com/en-us/library/windows/desktop/ff476876(v=vs.85).aspx

 

Thanks, but I can google that myself.

 

I meant, from a practical point of view: which of the newly supported features are the most used, or most useful, in DX11 when it comes to game development? What are the key features that you would say make it worth porting your legacy DX9 code up to DX11?

 

Or, to say it yet another way: which high-level features (those advertised by the game industry) "need" DX11 if you want to bring them into your own game?


In Topic: Direct X 11 really worth it?

17 January 2014 - 02:33 AM


Purely in terms of being a graphics API, I would say D3D11 is unquestionably better than D3D9. It's cleaner, leaner, and lets you make use of newer hardware capabilities. Will that actually matter for your game? It depends on what kind of game you're making, but in general the graphics API isn't going to become a limiting factor until you reach a really sophisticated level of graphics tech. If you want bleeding-edge graphics, then yes, you want D3D11. If you want sprites or rudimentary 3D, the API isn't going to be that important.

 

Hi, may I ask which hardware capabilities supported by D3D11 (besides tessellation) are the most important for modern 3D games, in your opinion? Or, to put it another way: which of the supported hardware features does a modern 3D game benefit most from when using D3D11 compared to D3D9?


In Topic: [DX9] Execute pixel shader branch on every N-th frame

15 January 2014 - 04:19 AM


Alternating on the level of pixels (either even/odd lines or like on a checkerboard) doesn't sound so bad - did you give it a try?

 

Yes I did - technically it works flawlessly, as would be expected. However, it makes the image look entirely pixelated, rough and unpolished, because the effective resolution is quartered. I'd need at least some sort of Gaussian blur to make it look "smooth" again, which would make performance drop drastically and is therefore not an option here (especially since it would achieve the opposite of what I'm intending, namely decreasing GPU load).
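
For reference, the checkerboard split I tried looks roughly like this in a D3D9 SM 3.0 pixel shader (`EffectA` / `EffectB` stand in for the two partial effects and are hypothetical names):

```hlsl
// Checkerboard split (D3D9, SM 3.0): even pixels run one partial effect,
// odd pixels the other. EffectA/EffectB are hypothetical helper functions.
float4 PSCheckerboard(float2 uv : TEXCOORD0, float2 vpos : VPOS) : COLOR0
{
    // Parity of (x + y) yields the checkerboard pattern.
    float parity = fmod(vpos.x + vpos.y, 2.0);
    return (parity < 1.0) ? EffectA(uv) : EffectB(uv);
}
```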

 

So I think that, despite all its downsides, the best option might be the timer approach: find a well-balanced frequency for toggling between the partial effects, so that both of them get executed about equally often. I could still provide a fallback using the mentioned checkerboard approach for users with low frame rates, where the timer approach wouldn't look good.
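
The timer approach could be sketched like this, assuming the engine already exposes an elapsed-time uniform (here called `gTime`; that name, `PartA` / `PartB`, and the toggle frequency are all placeholders):

```hlsl
// Timer-based toggling (D3D9): alternate between two partial effects at a
// fixed frequency using an engine-provided time value, since a frame
// counter isn't available. gTime, PartA and PartB are placeholders.
float gTime;                         // elapsed time in seconds, set by the engine
static const float TOGGLE_HZ = 30.0; // toggles per second; needs tuning

float4 PSTimerToggle(float2 uv : TEXCOORD0) : COLOR0
{
    float phase = fmod(gTime * TOGGLE_HZ, 2.0); // sawtooth in [0, 2)
    return (phase < 1.0) ? PartA(uv) : PartB(uv);
}
```

At low frame rates the toggle would no longer land on alternating frames, which is exactly why the checkerboard fallback would still be needed.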

 

Thanks guys, I appreciate your help.


In Topic: [DX9] Execute pixel shader branch on every N-th frame

13 January 2014 - 09:05 AM

Thanks, guys. I was afraid of that. And no, I don't think that I'd have any option to stuff a frame counter into an alpha channel or something like that.

 


A question - do you need it to work on alternating frames? I mean, do you strictly need it (for something like some 3D glasses that use this principle to distinguish between images for left and right eye)?

 

Yes and no. I have more than one application where I'd need something like this. The mentioned 3D-glasses support is indeed one of them, but it's not urgent yet. For now, I've thought of something else:

 

I've got a couple of post-process effects that are quite expensive in terms of GPU load, mainly ones where I sample / measure / compute several aspects of the same effect sequentially and then merge / mix the partial results into one final image. So I thought that, instead of doing all this together in a single shader pass (I don't have multiple passes anyway, BTW), I could sort of "split" these calculations and spread them over two or more frames. The potential flickering caused by toggling between two (or more) partial effects would be acceptable, in some cases even desired (think of, for example, a simulated night-vision effect).

 

The idea of using frames for this is just to make sure that the different parts of the effect are executed in alternating order, to avoid one part being executed much more often than the other, which would cause weird visual lagging.

 

So, if using frames is not a possible or suitable way of doing such a "split", what other options do I have (besides the most obvious one, distinguishing between even and odd pixels)?
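
For comparison, if the engine could be made to update even a single shader constant per frame, the frame-based split itself would be trivial (the `gFramePhase` constant and the `PartA` / `PartB` helpers are hypothetical):

```hlsl
// Frame-parity split (D3D9): the engine uploads (frameCounter % 2) into
// gFramePhase each frame, and the shader runs one half of the expensive
// effect per frame. All names here are hypothetical placeholders.
float gFramePhase; // 0.0 on even frames, 1.0 on odd frames

float4 PSFrameSplit(float2 uv : TEXCOORD0) : COLOR0
{
    return (gFramePhase < 0.5) ? PartA(uv) : PartB(uv);
}
```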

