
Radikalizm

Member Since 05 May 2011
Online Last Active Today, 02:56 AM

Posts I've Made

In Topic: Directx 11, 11.1, 11.2 Or Directx 12

Yesterday, 12:42 PM

but day to day DX12 coding isn't going to be any different than DX11 speedwise or anything else.

But if you already mastered DX11, DX12 shouldn't be that much different.
 

 

DX12 actually is quite different. Knowing DX11 is pretty much a requirement before starting with 12, since certain concepts carry over, but all of the handy higher-level conveniences are stripped away so that you get more fine-grained control.

 

One area I always like to bring up is resource binding; in 11 it's simply a matter of binding the shaders you need and calling ID3D11DeviceContext::XSSetShaderResources/SetUnorderedAccessViews/SetConstantBuffers/SetSamplers/etc. (where XS stands for the shader stage, e.g. VS/PS/CS), and you're good to go.
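To make that concrete, here's a minimal sketch of what DX11 binding looks like, assuming you've already created the shader and resources; names like mySRV, myConstants, and mySampler are placeholders:

```cpp
// Minimal DX11 binding sketch: pixelShader, mySRV, myConstants, and mySampler
// are placeholders for objects you've already created elsewhere.
#include <d3d11.h>

void BindForDraw(ID3D11DeviceContext* context,
                 ID3D11PixelShader* pixelShader,
                 ID3D11ShaderResourceView* mySRV,
                 ID3D11Buffer* myConstants,
                 ID3D11SamplerState* mySampler)
{
    context->PSSetShader(pixelShader, nullptr, 0);
    context->PSSetShaderResources(0, 1, &mySRV);       // t0
    context->PSSetConstantBuffers(0, 1, &myConstants); // b0
    context->PSSetSamplers(0, 1, &mySampler);          // s0
    // ...issue your draw call; the runtime tracks hazards and lifetimes for you.
}
```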

 

In DX12 it becomes a lot more complicated. First of all you start off with constructing a root signature, which raises the question of how you want to do root signature layouts. Do you want direct root parameters for constant buffers and structured buffers? Do you want to set up descriptor tables? Do you want constants embedded directly into the root signature? Static samplers? How many parameters can you fit into your root signature before it spills over into slower memory? What are the recommendations for the hardware architecture you're trying to target (hint: they can differ quite drastically)? How do you bundle your descriptor tables in such a way that they adhere to the resource binding tier you're targeting? How fine-grained is your root signature going to be? Are you creating a handful of large root signatures as a catch-all solution, or are you going with small, specialized root signatures?
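As a rough illustration of the kind of code those decisions turn into, here's a sketch of one arbitrary layout (a descriptor table of SRVs, a root CBV, and a static sampler); this is not a recommendation for any particular hardware, and error handling is omitted:

```cpp
// Sketch of one possible D3D12 root signature layout: an SRV descriptor table,
// a root CBV, and a static sampler. Purely illustrative; HRESULTs are ignored.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12RootSignature> CreateExampleRootSignature(ID3D12Device* device)
{
    D3D12_DESCRIPTOR_RANGE srvRange = {};
    srvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    srvRange.NumDescriptors = 4;                      // t0-t3
    srvRange.BaseShaderRegister = 0;
    srvRange.OffsetInDescriptorsFromTableStart = D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND;

    D3D12_ROOT_PARAMETER params[2] = {};
    params[0].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    params[0].DescriptorTable.NumDescriptorRanges = 1;
    params[0].DescriptorTable.pDescriptorRanges = &srvRange;
    params[0].ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

    params[1].ParameterType = D3D12_ROOT_PARAMETER_TYPE_CBV; // root descriptor at b0
    params[1].Descriptor.ShaderRegister = 0;
    params[1].ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

    D3D12_STATIC_SAMPLER_DESC sampler = {};
    sampler.Filter = D3D12_FILTER_MIN_MAG_MIP_LINEAR;
    sampler.AddressU = sampler.AddressV = sampler.AddressW = D3D12_TEXTURE_ADDRESS_MODE_WRAP;
    sampler.ComparisonFunc = D3D12_COMPARISON_FUNC_NEVER;
    sampler.MaxLOD = D3D12_FLOAT32_MAX;
    sampler.ShaderRegister = 0;                       // s0
    sampler.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 2;
    desc.pParameters = params;
    desc.NumStaticSamplers = 1;
    desc.pStaticSamplers = &sampler;
    desc.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);

    ComPtr<ID3D12RootSignature> rootSig;
    device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                                IID_PPV_ARGS(&rootSig));
    return rootSig;
}
```

Even in this tiny example you're already choosing table vs. root descriptor, shader visibility per parameter, and static vs. heap samplers, and every one of those choices has performance implications that vary by vendor.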

 

There's no general best practice here that applies to all cases, so you're going to want answers to the questions above.

 

Once you have a root signature you get to choose how to deal with descriptor heaps. How are you dealing with descriptor allocation? How do you deal with descriptors which have different lifetimes (e.g. single frame vs. multiple frames)? Are you going to use CPU-side staging before copying to a GPU descriptor heap? What's your strategy for potentially carrying across bound resources when your root signature or PSO changes (if you even want this feature at all)?
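For a sense of what the staging approach involves, here's a minimal sketch of a per-frame, shader-visible heap fed from a CPU-side heap; the linear "allocate and reset each frame" strategy is just one of many possible schemes, and the names are made up:

```cpp
// Sketch of a per-frame shader-visible descriptor heap fed from a CPU-side
// staging heap. One of many possible allocation strategies; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

struct FrameDescriptorRing
{
    ComPtr<ID3D12DescriptorHeap> gpuHeap; // shader-visible heap bound to the command list
    UINT increment = 0;
    UINT next = 0;

    void Init(ID3D12Device* device, UINT capacity)
    {
        D3D12_DESCRIPTOR_HEAP_DESC desc = {};
        desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
        desc.NumDescriptors = capacity;
        desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;
        device->CreateDescriptorHeap(&desc, IID_PPV_ARGS(&gpuHeap));
        increment = device->GetDescriptorHandleIncrementSize(desc.Type);
    }

    // Copy 'count' descriptors from a non-shader-visible (CPU) heap and return the
    // GPU handle you'd pass to SetGraphicsRootDescriptorTable.
    D3D12_GPU_DESCRIPTOR_HANDLE Stage(ID3D12Device* device,
                                      D3D12_CPU_DESCRIPTOR_HANDLE cpuStart,
                                      UINT count)
    {
        D3D12_CPU_DESCRIPTOR_HANDLE dstCpu = gpuHeap->GetCPUDescriptorHandleForHeapStart();
        D3D12_GPU_DESCRIPTOR_HANDLE dstGpu = gpuHeap->GetGPUDescriptorHandleForHeapStart();
        dstCpu.ptr += SIZE_T(next) * increment;
        dstGpu.ptr += UINT64(next) * increment;
        device->CopyDescriptorsSimple(count, dstCpu, cpuStart,
                                      D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
        next += count;
        return dstGpu;
    }

    void Reset() { next = 0; } // only once the GPU has finished with the frame
};
```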

 

Again, these questions will need answers before you can continue on. It's easy enough to find a tutorial somewhere and copy-paste code which does this for you, but then what's the point of using DX12 in the first place? If you need cookie-cutter solutions, then stick with DX11. No need to shoot yourself in the foot by using an API which is much more complex than what your application requires.

 

Have a look at this playlist to see how deep the root signature and resource binding rabbit hole can go.

 

 

This kind of thing applies to pretty much every single aspect of DX12. Things you could take for granted in 11 become serious problems in 12: resource lifetime, explicit CPU-GPU synchronization, virtual memory management, resource state transitions and barriers, pipeline state pre-building, and a lot more turn into issues you really can't ignore.
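The explicit CPU-GPU synchronization alone is a good example. In DX11 the runtime handles it; in DX12 you wait on fences yourself. A minimal sketch of a full flush, assuming you already have a device and command queue (error handling omitted):

```cpp
// Sketch of explicit CPU-GPU synchronization in D3D12: block the CPU until the
// GPU has drained the queue. device/queue are placeholders; HRESULTs are ignored.
#include <d3d12.h>
#include <windows.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void WaitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE event = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    const UINT64 fenceValue = 1;
    queue->Signal(fence.Get(), fenceValue);       // GPU writes fenceValue when it reaches this point
    if (fence->GetCompletedValue() < fenceValue)  // CPU blocks until the GPU catches up
    {
        fence->SetEventOnCompletion(fenceValue, event);
        WaitForSingleObject(event, INFINITE);
    }
    CloseHandle(event);
}
```

In a real renderer you'd keep per-frame fence values and only wait when you're about to reuse a frame's resources, but the point stands: forget a wait and you overwrite memory the GPU is still reading.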

 

If you're shipping an application, why go through the trouble of having to deal with all of this stuff when you know that an API like DX11 will suffice? As far as I'm aware, DX11.3 has feature parity with the highest available DX12 feature level, so it's not like you're missing out on any specific features, aside from potentially having more explicit control over multithreaded rendering (which is a massive can of worms in itself).

 

DirectX 12 is not something you need to use to write modern graphics applications. It's something you use when you know up front that you'll get some real gains out of it.


In Topic: What Makes A Game Look Realistic?

21 July 2016 - 06:05 PM

Yup, an accurate lighting system goes a long way towards achieving realism. In addition to that you'll want your artists to be experienced with these kinds of physically based lighting systems, so that they don't create "impossible" materials or lighting setups.

 

Offline rendering methods such as path tracing can already achieve photorealism, but the techniques used there are far too expensive to apply in a real-time context.


In Topic: dx11 shader reflection need advice

13 July 2016 - 10:02 PM

@Rad sorry for the downvote, I meant to press up vote. Tablet glitched out on me.

 

Don't worry about it :)


In Topic: dx11 shader reflection need advice

12 July 2016 - 12:27 PM

Do you explicitly need reflection for what you're trying to achieve? I often find it much easier to just declare C++ structures for the constant buffers I'm going to need and create instances of those which I can bind directly. You completely avoid having to dynamically construct a bunch of intermediate buffers for your shaders to use. This especially holds true for constant buffers you know you'll need 99% of the time, such as engine constants or per-view constants.
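A minimal sketch of that approach, with a made-up PerViewConstants struct (the field names are placeholders, and the struct size has to stay a multiple of 16 bytes):

```cpp
// Sketch of the "just declare a C++ struct" approach for a D3D11 constant buffer.
// PerViewConstants and its fields are illustrative names only.
#include <d3d11.h>
#include <DirectXMath.h>
#include <cstring>

struct PerViewConstants
{
    DirectX::XMFLOAT4X4 viewProjection;
    DirectX::XMFLOAT3   cameraPosition;
    float               padding;          // keep the size a multiple of 16 bytes
};

ID3D11Buffer* CreatePerViewBuffer(ID3D11Device* device)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = sizeof(PerViewConstants);
    desc.Usage = D3D11_USAGE_DYNAMIC;             // rewritten from the CPU every frame
    desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);
    return buffer;
}

void UpdatePerView(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                   const PerViewConstants& data)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    std::memcpy(mapped.pData, &data, sizeof(data));
    context->Unmap(buffer, 0);
    context->VSSetConstantBuffers(0, 1, &buffer); // b0 in the vertex shader
}
```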

 

If you do need a fully dynamic setup for constant buffer binding, it might be a good idea to store some form of descriptor structure for each constant buffer which gives your application the info it needs to build a buffer it can send to the GPU. Think of it as a simple schema for your constant buffers. You can then build these buffers wherever you need them and write to them by mapping them and copying over the chunk of memory holding your data, which I assume is roughly what you described.
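One way that schema idea could look; everything here (ConstantField, ConstantBufferLayout, SetField) is a hypothetical illustration, not an API from D3D or from this thread:

```cpp
// Hypothetical sketch of a constant buffer "schema": a layout description plus a
// CPU-side blob you fill by name, then upload with Map/memcpy/Unmap as usual.
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

struct ConstantField
{
    std::string name;
    uint32_t    offset; // byte offset inside the buffer
    uint32_t    size;   // byte size of the field
};

struct ConstantBufferLayout
{
    std::vector<ConstantField> fields;
    uint32_t                   totalSize; // must be a multiple of 16 bytes

    // CPU-side staging memory to fill before uploading to an ID3D11Buffer.
    std::vector<uint8_t> MakeStagingData() const { return std::vector<uint8_t>(totalSize); }
};

// Write one named value into the staging blob; silently ignores unknown names
// or mismatched sizes in this sketch.
inline void SetField(const ConstantBufferLayout& layout, std::vector<uint8_t>& data,
                     const std::string& name, const void* value, uint32_t valueSize)
{
    for (const ConstantField& f : layout.fields)
    {
        if (f.name == name && f.size == valueSize)
        {
            std::memcpy(data.data() + f.offset, value, valueSize);
            return;
        }
    }
}
```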

 

I'm not a huge fan of the fully dynamic approach, though. I can see shader reflection being useful in tools where you're setting up material structures and the like, but I'd rather not rely on it at runtime. To each his/her own though :)


In Topic: Should I leave Unity?

11 July 2016 - 10:53 PM

Unity's profiler shows allocations, not garbage.

 

This is correct. OP, are you maybe interpreting allocated memory as "garbage"?

 

However, if you're calling Rebuild often you could be creating unreferenced List objects, which will in fact be picked up by the garbage collector at a later point in time. If this does get called frequently, you might want to consider setting a maximum limit on your List sizes and allocating that up front instead of constantly reallocating potentially large chunks of data. In Unity these upper limits should be easy to determine, as I believe it only allows 16-bit indices, giving you a maximum of 64k vertices per mesh.

 

Also be aware that you can trigger garbage collection yourself if you really are in a position where it becomes a genuine problem (which probably isn't the case just yet). That way you at least control when it happens instead of introducing random stalls.

