
#5316111 Questions about Physical based shading(PBS)?

Posted by on 21 October 2016 - 01:27 PM

The PBR book I linked above basically starts out by saying that PBR is ray tracing.


No, it doesn't. pbrt is the name of the framework the authors walk you through in the book. You can find the accompanying source code here: https://github.com/mmp/pbrt-v3


The book walks you through the implementation of a ray tracing framework which uses physically based shading models. This doesn't mean that physically based shading is exclusive to ray tracing. 


As mentioned before, physically based rendering is just an approach to shading and lighting with a foundation in the actual physics behind lighting and reflections, rather than working backwards from "what looks right" like we used to do in previous generations.
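To make "grounded in actual physics" a bit more concrete (this is just the standard microfacet formulation, not something specific to this thread): most physically based shading models build their specular term on a Cook-Torrance style BRDF,

\[ f_{\text{spec}}(l, v) = \frac{D(h)\,F(v, h)\,G(l, v, h)}{4\,(n \cdot l)(n \cdot v)} \]

where D is the normal distribution function (e.g. GGX), F the Fresnel term, G the geometry/shadowing term, n the surface normal and h the half vector between the light direction l and view direction v. Whether you evaluate that in a rasterizer or a ray tracer doesn't matter, which is exactly the point made above.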

#5306253 Ecs Architecture Efficiency

Posted by on 16 August 2016 - 05:12 PM

I'm looking at the ability to render as many entities as possible to the screen before it drops the frame rate below 60 FPS.


Honestly, you're going to run into a ton of actual rendering-related bottlenecks before a decently architected game object/entity/"whatever buzzword you want to use" system gets in your way. Don't start optimizing for theoretical scenarios. Solve the problems you actually have at hand to get to the desired frame time for your specific game.




Spread that 16ms out to every system???  Are you trying to run every system at 60FPS?  That's wasteful and pointless....


You've obviously never worked on any type of shooter before. Try telling that to people designing any type of competitive game which supports split-second inputs. 16ms in a game update tick can definitely make a difference.




Not to mention there's the option to run your rendering on a separate thread asynchronously, which most games don't even do anymore.



What are you talking about? There are tons of games doing async rendering. The era of single-threaded update and render is over!




Still looking for more improvements that will make this even more efficient.  But my question is actually really simple, how fast are your engines running and what specific design patterns are you using to improve how many simultaneous entities can exist and be drawn without slowing your engine down.  I'm trying to find a goal to shoot for and new ideas to improve my own engine.


This is still a pretty useless comparison to make. You're acting like every engine is comparable in some meaningful way when it comes to performance. They're not. Focus on what your requirements and goals are. Do some profiling, find your bottlenecks, fix them. Rinse and repeat.

#5303016 Directx 11 For Small Scale And Directx 12 For Large Scale.

Posted by on 28 July 2016 - 05:37 PM

Just something to think about, you could basically make dx11 using dx12. Dx11 can almost be looked at as a wrapper for dx12


Microsoft actually did this as a porting aid for bringing DirectX 11 applications over to DirectX 12; it's called D3D11On12. We briefly evaluated it as a tool for bringing a title over to DirectX 12 when 12 was still in its EAP stage. I don't know whether it has been kept up to date; to be honest with you, it doesn't seem like the right way to approach a DirectX 12 port in retrospect.


I think the name dx12 is a little misleading as others have said; it's not really bringing anything new to the table, but rather giving you much more control over the graphics hardware


Completely agreed, and it seems to be creating a lot of confusion, especially in hobbyist scenes, where people now feel the need to move to this new API thinking that DirectX 11 is deprecated and outdated. It really, really isn't! I can't stress this enough.




I think part of the problem is that very low-level technologies like these are now exposed to the gaming community and get treated as must-haves for staying competitive in the gaming market, whether your game will benefit from them or not. DirectX 12 and Vulkan are the new edgy buzzwords which gaming enthusiasts use to judge and compare games. There is this idea that having your game run on DirectX 12 will automatically make it faster and more graphically impressive, because 12 is a larger number than 11 and therefore it must be better.


I have lived and breathed DirectX 12 for a good year now, working on bringing an existing DirectX 11 engine over to DirectX 12 (because of reasons), and it has been an incredibly challenging endeavor. Only recently have we been able to actually get more out of 12 than we could out of 11 within the architecture that was in place, and that was with a team of talented and experienced engineers.


I am all for people broadening their horizons and teaching themselves how to work with this genuinely exciting new tech, but for the love of all that is good: if you're actually trying to ship something on your own tech within a reasonable time frame, and without an experienced team working on this full time, you're just so much better off sticking with 11. Building a 12-based engine takes up so many of your engineering resources while architecting, implementing and debugging your engine (and debugging in 12 can be ruthless) that it's just not worth the trouble if you don't have a very good reason to go for 12 in the first place.


AAA developers who know they can bring GPU drivers to their knees in DirectX 11 or companies doing heavy GPU simulations with lots of data throughput know up front they can benefit from 12, so it makes sense for them to use it if it helps them in the long run. It's exactly these companies who reached out to hardware vendors and companies like Microsoft to state that they would be interested in such an API, eventually resulting in the development of Mantle and subsequently DX12 and Vulkan. As an indie developer or hobbyist it seems very unlikely to me that you'd ever need or benefit from something like 12.



I know I can sound like a grump and a broken record by writing all of these DirectX 12 posts, and I also know that there's plenty of people who think I'm wrong and who don't want to hear this stuff, but I just really want to make the point that you don't have to use 12, and that it's usually a better idea to stick with 11 if you want to actually build cool and exciting graphics techniques. If there's anything I like to see from a community like this it's people building cool new crazy exciting graphics stuff, and I feel like 11 will get you there much faster than 12 will.



#5302515 Directx 11, 11.1, 11.2 Or Directx 12

Posted by on 25 July 2016 - 12:42 PM

but day to day DX12 coding isn't going to be any different than DX11 speedwise or anything else.

But if you already mastered DX11, DX12 shouldn't be that much different.


DX12 actually is quite different. Knowing DX11 is pretty much a requirement for starting off with 12 as certain concepts carry over, but all the handy higher level tools are stripped away so you have more fine-grained control.


One area I always like to bring up is resource binding; in 11 it's simply a matter of binding the shaders you need and calling ID3D11DeviceContext::XSSetShaderResources/SetUnorderedAccessViews/SetConstantBuffers/SetSamplers/etc, and you're good to go.
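To make that concrete, roughly what a draw setup looks like in 11 (a sketch only; context, the shaders and the views are just placeholders for whatever your engine holds onto):

// D3D11: bind shaders and resources straight on the device context and draw.
context->VSSetShader(vertexShader, nullptr, 0);
context->PSSetShader(pixelShader, nullptr, 0);
context->PSSetShaderResources(0, 1, &albedoSRV);    // t0
context->PSSetConstantBuffers(0, 1, &perObjectCB);  // b0
context->PSSetSamplers(0, 1, &linearSampler);       // s0
context->DrawIndexed(indexCount, 0, 0);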


In DX12 it becomes a lot more complicated. First of all you start off with constructing a root signature, which raises the question of how you want to do root signature layouts. Want to do direct root parameters for constant buffers and structured buffers? Want to set up descriptor tables? Do you want constants directly embedded into the root signature? Static samplers? How many parameters can you fit into your root signature before it spills into slower memory? What are the recommendations for the hardware architecture you're trying to target (hint: they can differ quite drastically)? How do you bundle your descriptor tables in such a way that it adheres to the resource binding tier you're targeting? How fine-grained is your root signature going to be? Are you creating a handful of large root signatures as a catch-all solution, or are you going with small specialized root signatures?


There's no general best practice here which applies to all cases, so you're going to want answers to those questions above. 
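Just to give an idea of the kind of plumbing involved, here's a bare-bones root signature with one SRV descriptor table and one root CBV (a sketch only; device is assumed to be your ID3D12Device, and a real engine would pick the layout based on the questions above):

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// One descriptor table covering t0..t3, visible to the pixel shader.
D3D12_DESCRIPTOR_RANGE srvRange = {};
srvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
srvRange.NumDescriptors = 4;
srvRange.BaseShaderRegister = 0;
srvRange.OffsetInDescriptorsFromTableStart = D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND;

D3D12_ROOT_PARAMETER params[2] = {};
params[0].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
params[0].DescriptorTable.NumDescriptorRanges = 1;
params[0].DescriptorTable.pDescriptorRanges = &srvRange;
params[0].ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

// A root CBV at b0; costs two DWORDs of root signature space.
params[1].ParameterType = D3D12_ROOT_PARAMETER_TYPE_CBV;
params[1].Descriptor.ShaderRegister = 0;
params[1].ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

D3D12_ROOT_SIGNATURE_DESC desc = {};
desc.NumParameters = 2;
desc.pParameters = params;
desc.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

ComPtr<ID3DBlob> blob, error;
D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);

ComPtr<ID3D12RootSignature> rootSignature;
device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                            IID_PPV_ARGS(&rootSignature));

And that's before you've even thought about how this interacts with your PSOs and shader register conventions.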


Once you have a root signature you get to choose how to deal with descriptor heaps. How are you dealing with descriptor allocation? How do you deal with descriptors which have different lifetimes (e.g. single frame vs multiple frames)? Are you going to use CPU-side staging before copying to a GPU descriptor heap?  What's your strategy for potentially carrying across bound resources when your root signature or PSO changes (if you even want this feature at all)?
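As an illustration of the CPU-side staging question, it roughly boils down to something like this (a sketch; device, stagingHandle and slot are assumed to exist):

// One shader-visible CBV/SRV/UAV heap that the GPU actually reads from.
D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {};
heapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV;
heapDesc.NumDescriptors = 1024;
heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE;

ComPtr<ID3D12DescriptorHeap> gpuHeap;
device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&gpuHeap));

// Copy a descriptor that was created in a non-shader-visible staging heap into slot 'slot'.
UINT increment = device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
D3D12_CPU_DESCRIPTOR_HANDLE dst = gpuHeap->GetCPUDescriptorHandleForHeapStart();
dst.ptr += slot * increment;
device->CopyDescriptorsSimple(1, dst, stagingHandle, D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);

How you carve up, version and recycle those slots per frame is entirely up to you.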


Again, these questions will need answers before you can continue on. It's easy enough to find a tutorial somewhere and copy-paste code which does this for you, but then what's the point of using DX12 in the first place? If you need cookie-cutter solutions, then stick with DX11. No need to shoot yourself in the foot by using an API which is much more complex than what your application requires.


Have a look at this playlist to see how deep the root signature and resource binding rabbit hole can go.



This kind of stuff pretty much applies to every single aspect of DX12. Things which you could take for granted in 11 become very serious problems in 12. Things you didn't have to worry about like resource lifetime, explicit CPU-GPU synchronization, virtual memory management, resource state transitions, resource operation barriers, pipeline state pre-building, and a lot more become serious issues you really can't ignore.
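For a taste of what "you really can't ignore" means: a texture that was just rendered to and now needs to be sampled already requires an explicit state transition, and making sure the CPU doesn't stomp on memory the GPU is still using requires explicit fencing. A sketch (commandList, commandQueue, texture, fence, fenceValue and fenceEvent are assumed to exist):

// Transition a texture from render target to pixel shader resource.
D3D12_RESOURCE_BARRIER barrier = {};
barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barrier.Transition.pResource = texture;
barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
commandList->ResourceBarrier(1, &barrier);

// Explicit CPU-GPU synchronization through a fence.
commandQueue->Signal(fence, ++fenceValue);
if (fence->GetCompletedValue() < fenceValue)
{
    fence->SetEventOnCompletion(fenceValue, fenceEvent);
    WaitForSingleObject(fenceEvent, INFINITE);
}

In 11 the driver and runtime quietly did all of this for you.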


If you're shipping an application, why go through the trouble of having to deal with all of this stuff when you know that an API like DX11 will suffice? As far as I'm aware, DX11.3 has feature parity with the highest available DX12 feature level, so it's not like you're missing out on any specific features, aside from potentially having more explicit control over multithreaded rendering (which is a massive can of worms in itself).


DirectX 12 is not something you need to use to write modern graphics applications. It's something you use when you know up front that you'll get some real gains out of it.

#5301845 What Makes A Game Look Realistic?

Posted by on 21 July 2016 - 06:05 PM

Yup, an accurate lighting system will go a long way towards achieving realism. In addition to that you'll want your artists to be experienced with these kinds of physically based lighting systems, so they don't create "impossible" materials or lighting setups.


Offline rendering methods such as path tracers can already achieve photorealism, but the techniques used there are way too expensive to apply in a real-time context.

#5300415 dx11 shader reflection need advice

Posted by on 12 July 2016 - 12:27 PM

Do you explicitly need reflection for what you're trying to achieve? I often find it much easier to just declare C++ structures for the constant buffers I'm going to require, and create instances of those which I can bind directly. You completely avoid having to dynamically construct a bunch of intermediate buffers and such for your shader to use. This especially holds true for constant buffers which you know you'll need 99% of the time, such as engine constants or per-view constants.
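For example, something along these lines (a sketch; the struct layout is made up, device is your ID3D11Device, and the C++ side simply mirrors the HLSL cbuffer, including the 16-byte alignment/padding rules):

#include <d3d11.h>
#include <DirectXMath.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Mirrors a "cbuffer PerViewConstants : register(b0)" on the HLSL side.
struct alignas(16) PerViewConstants
{
    DirectX::XMFLOAT4X4 viewProjection;
    DirectX::XMFLOAT3   cameraPosition;
    float               padding; // keep the size a multiple of 16 bytes
};

D3D11_BUFFER_DESC desc = {};
desc.ByteWidth      = sizeof(PerViewConstants);
desc.Usage          = D3D11_USAGE_DYNAMIC;
desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

ComPtr<ID3D11Buffer> perViewBuffer;
device->CreateBuffer(&desc, nullptr, &perViewBuffer);

No reflection needed; you just keep the struct and the HLSL cbuffer in sync by hand.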


If you do need to have a fully dynamic setup for constant buffer binding it might be a good idea to just store some form of descriptor structure for your constant buffer which can provide your application with the info it needs to build a buffer which can be sent to the GPU. Think of it as a simple schema for your constant buffers. In that case you can just build these buffers where you need them in your application and then write to them simply by mapping them and copying over the chunk of memory which holds your data, which I assume is sort of like what you described.
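A rough sketch of that descriptor/schema idea (ConstantBufferDesc and UploadConstants are hypothetical names for illustration, not anything from D3D itself):

#include <cstring>
#include <d3d11.h>

// Minimal "schema" your tools/material system could fill in per constant buffer.
struct ConstantBufferDesc
{
    UINT byteWidth; // total size, padded up to a multiple of 16 bytes
    UINT bindSlot;  // b-register to bind to
};

// Write a CPU-side blob of data into a dynamic constant buffer.
void UploadConstants(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                     const void* data, UINT byteWidth)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        memcpy(mapped.pData, data, byteWidth);
        context->Unmap(buffer, 0);
    }
}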


I'm not a huge fan of the fully dynamic approach though. I can see shader reflection being useful in tools where you're setting up material structures and things like that, but I'd rather avoid it at runtime. To each his/her own though :)

#5300287 Should I leave Unity?

Posted by on 11 July 2016 - 07:59 PM

You'll have to provide some more details then.

What does "it seems to always create garbage" mean specifically? It's invoking the garbage collector? It's allocating a large amount of memory? Do you have any profiling data or other statistics to show what's happening?


Also, if you really suspect the garbage collector, are you sure your pooling strategy is correct and not doing any hidden redundant allocations? Is the step where you're passing along data to your meshes doing any redundant copies?

#5300283 Should I leave Unity?

Posted by on 11 July 2016 - 07:48 PM

Seeing as you're talking about the garbage collector a lot, it seems like this is more of a problem of learning how to deal with large amounts of objects in a managed environment rather than a problem with Unity.


Object pooling will alleviate a lot of the situations where lots of objects of the same type are being created. In your case, since you're dealing with chunks, it might be easier to pre-allocate a large batch of chunks from which you can hand out and recycle them. In games I've shipped with Unity we had to apply this type of strategy a lot, for example for projectiles, effects and even pathfinding nodes if I remember correctly (I didn't write the pathfinding system).
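The core idea, language aside (sketched here in C++, but it maps one-to-one onto a C# pool backed by a List<T>):

#include <cstddef>
#include <vector>

// Fixed-capacity object pool: allocate everything once up front, then hand out
// and recycle objects without ever touching the allocator (or the GC) again.
template <typename T>
class ObjectPool
{
public:
    explicit ObjectPool(std::size_t capacity) : objects(capacity)
    {
        freeList.reserve(capacity);
        for (std::size_t i = 0; i < capacity; ++i)
            freeList.push_back(&objects[i]);
    }

    T* Acquire()
    {
        if (freeList.empty())
            return nullptr; // pool exhausted; grow it or fail loudly
        T* obj = freeList.back();
        freeList.pop_back();
        return obj;
    }

    void Release(T* obj) { freeList.push_back(obj); }

private:
    std::vector<T> objects;
    std::vector<T*> freeList;
};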


Proper object management is not a problem that's unique to Unity, C# or even managed languages in general. In C++ you're going to have to reason about data flow and usage just as much if not even more, so rolling your own engine is not going to fix this problem.


Plenty of programmers like to blame garbage collection schemes for many of their problems, but there are often good solutions for avoiding garbage collection issues which will eventually also benefit your code in terms of efficiency and cleanliness.

#5300051 Implement baked AO-maps

Posted by on 10 July 2016 - 05:23 PM

Add all the lighting in your shader from all lights and then multiply that value by the AO texture. Or you can simply take the scene's ambient lighting and multiply it by the AO map.



The second option is the only correct one when it comes to AO. Contrary to what you sometimes see in games, AO should only be applied to your ambient/indirect lighting term, as it is a form of shadowing of ambient lighting, not direct lighting. Applying it to your direct lighting term will result in those overly dark, ugly halos around objects, which you often see with SSAO-like implementations.
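In formula form, the idea is simply

\[ L_{\text{out}} = L_{\text{direct}} + AO \cdot L_{\text{ambient}} \]

with the AO term scaling only the ambient/indirect contribution.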




I now take the sampled color from the ao-map multiplied by the ambient light, then ambient result + diffuse color (the resulting color after normal calculation etc)


This is correct :)

#5299725 [Solved][D3D12] Triangle not rendering, vertex data not given to GPU

Posted by on 07 July 2016 - 09:02 PM

I've actually already programmed with opengl, it's just that I didn't think of it when I was typing that, because it has already been a few months since I did that.


And the reason I chose to learn some D3D12 is because I like to make it challenging for myself. Also, I have a lot of free time now, so I'm taking it slow; school only starts in the last week of September.


Seriously, take phantom's advice. Start with DX11 and then move to 12 once you're completely comfortable with it. DX12 absolutely expects you to be very experienced with DX11, and then cranks up the difficulty dial ridiculously high. It's not a beginner subject.


I understand wanting a challenge, but there are just so many absolutely unforgiving details about 12 which you just can't know going into this blind. Just look at the number of similar threads created here by people who have trouble getting the most basic things like a triangle or rectangle on screen. It's completely insane.


Please, do yourself a favor and start with DX11, so you don't have to deal with so many ridiculously complex moving parts which you really don't want to deal with if all you want to do is actual graphics work. At this point you're not going to be able to beat the driver in terms of efficiency, which is what DX12 is all about. Even more so, you're going to do way worse than what the driver can do in DX11, leading to horrible performance. This is not a case of "if I try hard enough I'll succeed"; you're really not going to get anything out of DX12 at this point.

#5298672 Draw rectangle Directx 12

Posted by on 30 June 2016 - 05:41 PM

I feel like there should be more of an emphasis in general on the fact that DirectX 12 and Vulkan are not APIs you want to dive into unless you actually have a need for them, or if you're planning on specifically refining your graphics engine development skills.


Starting with D3D12/Vulkan without any exposure to previous APIs just sets you up for a very very bad time with very bad results in the end (as mentioned above). Building a fully featured D3D12 engine which can outperform an established D3D11 engine is a complex task even for seasoned engineers.

#5292173 Useful to learn fluid mechanics?

Posted by on 17 May 2016 - 05:11 PM

If you have an interest in it, then go for it!


Various forms of fluid simulation are used in current-gen games, from ocean rendering to particle-based fluid simulations. I did some investigation into particle-based fluid simulation on the GPU at my workplace, with the goal of making an over-the-top, crazy-looking blood splatter simulation, which was lots of fun.

#5289297 Question about Open World Survival Game Engines

Posted by on 29 April 2016 - 01:32 PM

A relatively experienced engineer will easily be able to adapt to any tool, especially if it's a well documented tool like Unity, Unreal or CryEngine. The tool should be chosen for the benefit of the project first, previous experience with the tool comes second.


Focus on what you know and do best, leave the technical decisions to the people who have the technical background.

#5289272 Question about Open World Survival Game Engines

Posted by on 29 April 2016 - 11:26 AM

You're going about this completely backwards. Normally you'd gather a team of engineers based on your pitch (and proper compensation, of course), and you'd let them decide what technology to use for the project, since they'll be able to make a much more educated decision than you ever will.

#5275043 Diffuse IBL - Importance Sampling vs Spherical Harmonics

Posted by on 09 February 2016 - 02:03 PM

So am I correct in assuming that you're using the same approach as for your specular, where you're sampling from a fairly high resolution HDR cubemap for your diffuse results?

If so, don't bother, because as you've figured out for yourself, that will require a massive number of samples to get working nicely, since you're sampling the entire hemisphere rather than a focused area as you would with glossy specular materials.


What you could do here is filter and downsample a separate HDR cubemap offline, just like how you would approach filtering your specular mip levels for high roughness values, but just for your diffuse term. You can use a similar importance sampling approach to do this. This gives you a fairly tiny texture which should give very acceptable results with only a minimal number of samples required. At this point you should consider just going with an SH representation though, as that will give pretty much the exact same results (you're dealing with very low frequency data anyway) with a lower memory footprint, but if you're not familiar with that just yet you can experiment with just the separate diffuse texture.
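For the importance sampling part, the usual choice for a Lambertian diffuse term is cosine-weighted hemisphere sampling. A minimal sketch (Vec3 is just a placeholder type; u1 and u2 are uniform random numbers in [0, 1)):

#include <cmath>

struct Vec3 { float x, y, z; };

// Cosine-weighted sample direction in tangent space (z is the surface normal).
// The PDF is cos(theta) / pi, which exactly cancels the Lambertian cosine term.
Vec3 CosineSampleHemisphere(float u1, float u2)
{
    const float kPi = 3.14159265358979f;
    float r   = std::sqrt(u1);
    float phi = 2.0f * kPi * u2;
    return Vec3{ r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u1) };
}

Rotate the result into a frame around each cubemap texel's direction, accumulate the samples, and you have your prefiltered diffuse map.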


Whether you use an SH approach or a texture-based one will not make a huge change in the end result as we're dealing with very low frequency data here anyway. Spherical harmonics are just another way to represent your data, they're not a different way of generating that data. You'll still need to sample your diffuse data somehow before you can store it.


Additionally, you won't need any type of LUT, as your diffuse term is usually solely dependent on your normal vector, unless you're using some expensive fancy diffuse model which is view-dependent or which takes surface roughness into account. Even if that were the case, it usually comes down to taking your Fresnel factor into account somewhere, which can be done after sampling your diffuse IBL.


(PS: Hi Kevin, it's been a while!)