
Member Since 15 Dec 2001
Offline Last Active Private

#5309487 Multi thread rendering engine

Posted by on 05 September 2016 - 02:06 AM

[Apparently the information here, despite it being my job, was bad... at least according to whatever drive-by downvoters metric... as I don't want to spread bad information it has been removed]

#5306258 Engine design v0.3, classes and systems. thoughts?

Posted by on 16 August 2016 - 05:49 PM

Prefixing class names with "C" is pointless bullshit.

#5304537 Multithreaded Game Engine Architecture With Data Oriented Design

Posted by on 07 August 2016 - 03:59 PM

Yeah, unfortunately UE4 pretty much fails at points 1 and 2 - horrible objects are horrible for cache.
(To be fair, the engine is pushing 20 years old so this isn't really a surprise, even Unity which is younger suffers from the same problem.)

How would I do it?
Nothing like how UE4 does...
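To make the "horrible for cache" point concrete: the usual data-oriented alternative is to stop giving every object a fat, do-everything class and instead let each system own tightly packed arrays of exactly the data it touches. A minimal sketch (all names are illustrative, not from UE4 or any real engine):

```cpp
#include <cstddef>
#include <vector>

// The "fat object" style: position, velocity and everything else
// interleaved in one allocation per object. Updating positions drags
// all the unrelated bytes through the cache too.
struct FatGameObject {
    float px, py, pz;
    float vx, vy, vz;
    char  everythingElse[200]; // AI state, render handles, names, ...
};

// Data-oriented alternative: structure-of-arrays owned by the system
// that actually iterates it.
struct TransformSystem {
    std::vector<float> px, py, pz;
    std::vector<float> vx, vy, vz;

    void integrate(float dt) {
        // One linear sweep over contiguous floats - the prefetcher's best case.
        for (std::size_t i = 0; i < px.size(); ++i) {
            px[i] += vx[i] * dt;
            py[i] += vy[i] * dt;
            pz[i] += vz[i] * dt;
        }
    }
};
```

With the SoA layout every cache line fetched during `integrate` is 100% useful data; with `FatGameObject` most of each line is the 200 bytes you didn't ask for.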

#5301511 Finding that balance between optimization and legibility.

Posted by on 20 July 2016 - 05:47 AM

That's different on consoles where there is no branch prediction (I don't think the current generation added it, did they?)

The most recent consoles from MS and Sony are both x64 based and come with all the features in a modern CPU; AVX, SSE, branch prediction, out-of-order execution and the like.

They are basically (low-powered) PCs in a box.
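Even with branch prediction present, a branch on unpredictable data is still one of the more expensive things in a hot loop, which is why branchless formulations remain popular on these CPUs. A toy sketch of the two styles (invented function names; compilers commonly turn the second into a conditional move, though that is up to the optimiser):

```cpp
#include <cstdint>
#include <vector>

// Branchy version: on random data the predictor guesses wrong roughly
// half the time, and each mispredict stalls the pipeline.
int64_t sumAboveBranchy(const std::vector<int>& v, int threshold) {
    int64_t sum = 0;
    for (int x : v)
        if (x > threshold)
            sum += x;
    return sum;
}

// Branchless version: express the condition as arithmetic so there is
// nothing to predict (typically compiled to cmov/select on x64).
int64_t sumAboveBranchless(const std::vector<int>& v, int threshold) {
    int64_t sum = 0;
    for (int x : v)
        sum += (x > threshold) ? x : 0;
    return sum;
}
```

Both compute the same answer; the difference only shows up in cycle counts when the input is unpredictable, so always profile before contorting code this way.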

#5299590 [Solved][D3D12] Triangle not rendering, vertex data not given to GPU

Posted by on 07 July 2016 - 07:36 AM

Honestly, if you are new to graphics programming then I would recommend NOT using D3D12 (or Vulkan).

They basically switch everything to hard mode and force you to deal with a lot of things that you really don't want to be dealing with when new.

DX11 (and to a slightly lesser extent 4.3+ GL) are much better starting platforms for getting the hang of how to do this stuff.

#5296961 Problem on referencing a vector of derived class

Posted by on 17 June 2016 - 08:55 AM

The way I've seen it done previously is to have modules declare dependencies and then have a post-init step which passes pointers around so that modules can link to each other as required.

However, that's probably not a good solution - instead, a better one is probably for the physics component to take a reference to the 'transform' component of the object they are both a part of. Precisely how you'd do that depends on how things are set up in your case, but it could look something like;

    class PhysicsComponent : public Component
    {
        Transform * targetTransform = nullptr;
    public:
        void setTransform(Transform * t) { targetTransform = t; }
    };

    class PhysicsModule
    {
    public:
        void AddToGameObject(GameObject * go)
        {
            Transform * t = go->getComponent<Transform>();
            PhysicsComponent * p = go->addComponent<PhysicsComponent>();
            p->setTransform(t);
        }
    };

Plumbing depends on your setup and other details need to be resolved, but that's the basic idea :)
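A self-contained version of that basic idea, with the "plumbing" filled in one plausible way (every name here is invented for illustration; your GameObject/component machinery will differ):

```cpp
#include <memory>
#include <vector>

struct Transform { float x = 0, y = 0, z = 0; };

// Kept trivially simple: every object owns a transform directly.
struct GameObject {
    Transform transform;
};

// The physics component holds a pointer back to the transform of the
// object it belongs to, wired up once at creation time.
struct PhysicsComponent {
    Transform* targetTransform = nullptr;
    void setTransform(Transform* t) { targetTransform = t; }
    void applyGravity(float dt) {
        if (targetTransform)
            targetTransform->y -= 9.81f * dt;
    }
};

struct PhysicsModule {
    std::vector<std::unique_ptr<PhysicsComponent>> components;

    PhysicsComponent* addToGameObject(GameObject& go) {
        auto p = std::make_unique<PhysicsComponent>();
        p->setTransform(&go.transform);   // the post-init wiring step
        components.push_back(std::move(p));
        return components.back().get();
    }
};
```

The key property is that after `addToGameObject` runs, the physics side can touch the transform directly every frame with no lookup and no global registry involved.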

#5296727 [D3D12] How to correctly update constant buffers in different scenarios.

Posted by on 15 June 2016 - 02:52 PM

Sure sure, I was pretty much referring to what happened when command lists executed together.

Of course, technically speaking, in the example you've given the GPU itself has no concept of a 'command list'; the driver has just inserted some extra instructions to say 'do this thing'. Depending on the hardware in question, this could be the same instruction(s) a barrier or other sync method would introduce into the command stream.

Conceptually there is no difference, from the GPU's command processor point of view, between;

{ AAABBBCCCAABBAAC }[driver barrier]{ ABCABC }

... assuming in both cases the barriers were basically the same thing (wait for previous commands to retire and sync state type commands).

But, yeah, at this point I'm just arguing hardware semantics, I feel; we are both saying basically the same thing, certainly from the user's perspective - all things in a single ECL are just an instruction stream to the GPU, and you have to tell it when to pause.

#5296647 How to avoid Singletons/global variables

Posted by on 15 June 2016 - 08:21 AM

Because I'm fairly certain that your response isn't from using a singleton yourself, but because someone else told you so.

Then, in keeping with the theme of you being wrong, you'd be incorrect about this.
I was fucking things up for myself using the Singleton pattern about 15 years ago, despite being told not to (ah, the joys of thinking you know better) and regretted it.

I've also had bugs related to Singleton usage and global access in code bases FAR above the size of your tiny example.

I've also untangled the utter garbage mess which someone created by using a Singleton in the Renderer of a game engine, resulting in code which was not only easier to follow and reason about, but cleaner, easier to work with, faster, and had a lower memory footprint while allowing for other optimisations.

(And, tangentially related, I also found and fixed a file system access problem on Android in the UE4 code base where someone decided we'd only ever need to access the file via one handle and only via one thread; later, when someone changed something higher up the stack in the common platform code, Android builds died in unpredictable ways due to race conditions.)

So.. yeah... I know all about the horrors of singleton usage in real, non-trivial, code bases both caused by myself and others.
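For anyone wondering what the non-singleton version actually looks like: nothing exotic. Construct the dependency once, high up, and pass a reference to whatever needs it. A toy sketch (all names invented for illustration):

```cpp
#include <string>
#include <vector>

// Instead of Renderer reaching out to some TextureManager::instance(),
// the dependency is handed in explicitly. Ownership and lifetime are
// visible at the call site, and tests can substitute a fake.
struct TextureManager {
    std::vector<std::string> loaded;
    int load(const std::string& name) {
        loaded.push_back(name);
        return static_cast<int>(loaded.size()) - 1; // handle into 'loaded'
    }
};

struct Renderer {
    explicit Renderer(TextureManager& textures) : textures_(textures) {}
    int useTexture(const std::string& name) { return textures_.load(name); }
private:
    TextureManager& textures_; // borrowed reference, never a hidden global
};
```

The cost is one extra constructor parameter; the payoff is that you can see every consumer of the texture manager by grepping for the type, rather than hunting down every hidden `::instance()` call.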

#5296600 How to avoid Singletons/global variables

Posted by on 15 June 2016 - 03:47 AM

So, in order to get around 'programmer stupidity' of "forgetting to pass the pointer" you introduce something which can only go wrong when, in your words, programmer stupidity is involved?

*slow clap*

#5296482 Deferred Context Usage

Posted by on 14 June 2016 - 08:38 AM

What is it that you are doing on those contexts?
Things like map/discard and the like can cause a lot of memory to be consumed; for example, iirc if you 'map' on a deferred context the driver basically creates you a whole new copy/version of what you have just mapped and ties it to that context; when you factor in that the driver will buffer a few frames ahead, memory can be eaten very quickly.

(Also, as a side note, DX11 really doesn't scale well with threading - you'll probably want to profile, but outside of a few outlier cases, such as Civ5, most people didn't see an increase. Also, last I checked, AMD's drivers didn't expose the 'we can do this properly' property, so you are relying on the runtime, meaning even less chance of a speed up. NV were/are better about this; they put a huge chunk of time into it and their DX11 driver is simply Better, but even in that case CPU gains tended to be in the 5% range at best. In short... meh.)

#5295921 About different OpenGL Versions

Posted by on 10 June 2016 - 01:49 AM

I'm sorry... what?

Your 'don't use OpenGL' logic is because you have to 'pick a version' and then hope the user has updated drivers?
Honestly... if you can't pick a version number then you probably shouldn't be doing this job to start with... and Vulkan also requires updated and recent drivers to function... so what the hell?

I think you are SERIOUSLY underestimating the amount of work the driver is doing for you when you use OpenGL if you think a full blown game requires 'just a bit of memory management' - I've worked on consoles, I know wtf goes on behind the scenes.

(Also I never said DX was an alternative, just that vs 12, 11 still had a place, but honestly 'outside the Windows world' isn't really a thing anyway. Linux continues to do fuck all with market share, OSX isn't much better (and won't have Vulkan), and Android isn't up to speed yet either, and that has a massive GL|ES legacy which isn't going away any time soon, so the only viable 'battleground' right now is Windows frankly. Consoles don't use OpenGL or Vulkan either... although one does use DX11 and 12 so...)

#5295780 About different OpenGL Versions

Posted by on 09 June 2016 - 07:40 AM

Not really.

Vulkan (and D3D12) is basically graphics development on Hard Mode.
Unless you need its features, such as lower CPU overhead coupled with the ability to generate work across threads, or a couple of GPU features, then sure, go for it. However, it also comes with a shitload of extra work you have to do and the trust that you know wtf you are doing - you can, and will, crash the GPU, and at least to start with you'll produce horribly performing code.

But OpenGL (and D3D11) still have their place and allow you to get a lot done without having to worry about basically writing a graphics driver and all the bullshit that goes with it.

#5294196 Do you usually prefix your classes with the letter 'C' or something e...

Posted by on 30 May 2016 - 12:13 PM

We need to class deeper..

#5294192 Do you usually prefix your classes with the letter 'C' or something e...

Posted by on 30 May 2016 - 11:52 AM

Sure sure... it's just worthless information encoded into a name which a human is reading... and just wait until you need a 'ThingClassClass'... or when you decide it is no longer a class... or something becomes a class...

#5294171 Do you usually prefix your classes with the letter 'C' or something e...

Posted by on 30 May 2016 - 10:09 AM

I've started suffixing (as opposed to prefixing) my classes with the word Class.

I just threw up in my brain :(