
#5255067 Managing input layouts

Posted by Hodgman on 01 October 2015 - 07:14 PM

Yeah at the same time that I generate that header file, I also create a HLSL file containing a dummy vertex shader function for each type of vertex input structure. I then compile all these and package them up into a binary file along with all the D3D11_INPUT_ELEMENT_DESC structures for the game to use at runtime.

I was a bit worried about the optimizer, so the dummy code casts every input attribute to a float4, adds them all together, and returns the sum as SV_POSITION.
I obviously never use these vertex shaders, except as the validation argument that's required when making an input-layout...
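
The runtime side of that trick looks something like this C++ sketch (the concrete vertex structure here is a made-up example, not my actual generated code):

#include <d3d11.h>
#include <d3dcompiler.h>
#include <cstring>

// Example of a generated dummy VS: cast every attribute to float4 and sum them,
// so the optimizer can't dead-strip any of the inputs.
const char* kDummyVS =
    "struct VertexPNT { float3 pos : POSITION; float3 nrm : NORMAL; float2 uv : TEXCOORD0; };\n"
    "float4 main(VertexPNT v) : SV_POSITION {\n"
    "    return float4(v.pos, 1) + float4(v.nrm, 0) + float4(v.uv, 0, 0);\n"
    "}\n";

ID3D11InputLayout* CreateValidatedLayout(ID3D11Device* dev,
                                         const D3D11_INPUT_ELEMENT_DESC* elems, UINT n)
{
    ID3DBlob* code = nullptr;
    ID3DBlob* errors = nullptr;
    if (FAILED(D3DCompile(kDummyVS, std::strlen(kDummyVS), nullptr, nullptr, nullptr,
                          "main", "vs_5_0", 0, 0, &code, &errors)))
        return nullptr; // real code would log errors->GetBufferPointer()
    if (errors)
        errors->Release();

    // The dummy bytecode's only job: the signature-validation argument below.
    ID3D11InputLayout* layout = nullptr;
    dev->CreateInputLayout(elems, n, code->GetBufferPointer(), code->GetBufferSize(), &layout);
    code->Release();
    return layout;
}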

IMHO, that's the only terrible part of D3D input layouts. By doing this, I'm telling D3D "trust me, I'll be careful to only use this with matching vertex shaders", which is 100% allowed... So, I should be able to make that same promise by passing a null pointer for the VS validation argument :(

#5254995 DirectX12's terrible documentation -- some questions

Posted by Hodgman on 01 October 2015 - 09:43 AM

Also on the above -- GL SSBO != D3D structured buffer.
GL SSBO ~= D3D UAV(C++) / RW******(HLSL)
i.e. StructuredBuffer is an SRV (Texture Buffer in GL), but RWStructuredBuffer is a UAV (SSBO in GL).

Though it's still a bit unclear to me when one would pick a CBV over a SRV as SRVs can reference buffers which presumably can be accessed in shaders. And if they can be accessed in shaders then it must be in a similar manner?

CBV is designed for cases when every pixel/etc will need to consume all of the data present in the buffer.
SRV is designed for cases when each pixel/etc will consume different parts of the data present in the buffer.
* Constant access for all threads -> constant buffer.
* Random access -> buffer.
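
For illustration, here's roughly how that distinction shows up when creating descriptors in D3D12 (a C++ sketch; the sizes and strides are made-up examples):

#include <d3d12.h>

// Exposing buffer data as a CBV vs. a buffer SRV. 'device', 'buffer' and the
// descriptor-heap handles are assumed to already exist.
void CreateViews(ID3D12Device* device, ID3D12Resource* buffer,
                 D3D12_CPU_DESCRIPTOR_HANDLE cbvSlot,
                 D3D12_CPU_DESCRIPTOR_HANDLE srvSlot)
{
    // CBV -- "every thread reads all of it": one whole range, size padded to 256 bytes.
    D3D12_CONSTANT_BUFFER_VIEW_DESC cbv = {};
    cbv.BufferLocation = buffer->GetGPUVirtualAddress();
    cbv.SizeInBytes    = 256;                             // must be a multiple of 256
    device->CreateConstantBufferView(&cbv, cbvSlot);

    // SRV -- "each thread reads its own element(s)": e.g. a StructuredBuffer<MyStruct>.
    D3D12_SHADER_RESOURCE_VIEW_DESC srv = {};
    srv.Format                     = DXGI_FORMAT_UNKNOWN; // structured buffers use UNKNOWN
    srv.ViewDimension              = D3D12_SRV_DIMENSION_BUFFER;
    srv.Shader4ComponentMapping    = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
    srv.Buffer.FirstElement        = 0;
    srv.Buffer.NumElements         = 1024;                // element count, not bytes
    srv.Buffer.StructureByteStride = 16;                  // sizeof(MyStruct)
    device->CreateShaderResourceView(buffer, &srv, srvSlot);
}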


As mentioned in the link above, AMD (usually) doesn't actually perform any optimizations based on these assumptions any more -- they're both just buffers (except in a few rare cases under D3D11, where small cbuffers can be optimized as magic uniform data). However, I think Nvidia still does perform some extra optimizations when it knows that a buffer contains uniforms/constants.

Also mentioned in that link -- the descriptor sizes for all the different buffer/texture types can be quite different.

#5254916 Fail to Set Some Pixel Shader Parameters

Posted by Hodgman on 30 September 2015 - 08:44 PM

Where/how does it crash if you don't do that?

SetVector fails.
It crashes when you call SetVector? That's kinda important. What's the crash message? What are the arguments to the function? Why is it crashing? Are you using bad pointers on that line?

#5254721 Reflections with Fresnel

Posted by Hodgman on 29 September 2015 - 09:58 PM

AFAIK you need to perform the Fresnel calculations on the microfacet normals, not the macroscopic surface normal.
For rough surfaces, there's a lot of variation from the macro-normal, so it becomes impossible for the (integrated) Fresnel function to reach 100%.

You can either try to pre-integrate that function and store it in a lookup table, or just use a cheap approximation such as:
hack = lerp(1.0, 0.7, roughness); // pull the peak reflectance down as roughness increases
F = F0 + pow(1.0 - saturate(dot(H, V)), 5.0) * (1.0 - F0) * hack; // Schlick's approximation, scaled by the hack

#5254709 Shaders

Posted by Hodgman on 29 September 2015 - 07:58 PM

I emulate GL here - I create my own object that is a pair of pixel/vertex shaders (or a tuple of pixel/vertex/geometry shaders, etc...).
I then sort using the IDs of my "pair" objects.
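
A minimal C++ sketch of that pair object (the names are illustrative, not my engine's actual types):

#include <algorithm>
#include <cstdint>
#include <vector>

using VertexShaderId = uint16_t;
using PixelShaderId  = uint16_t;

struct ShaderProgram              // the GL-style "program": a VS/PS pair
{
    uint32_t       id;            // unique per pair -- the value draws are sorted on
    VertexShaderId vs;
    PixelShaderId  ps;            // extend with geometry/hull/domain stages as needed
};

struct DrawCall { const ShaderProgram* program; /* mesh, constants, etc. */ };

void SortByProgram(std::vector<DrawCall>& draws)
{
    std::sort(draws.begin(), draws.end(), [](const DrawCall& a, const DrawCall& b)
              { return a.program->id < b.program->id; });
}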

#5254559 What exactly is API-First?

Posted by Hodgman on 29 September 2015 - 03:44 AM

You know, I worked for a principal engineer who advocated very strongly for a cleanly defined API up front.

The difference was that we were working as multiple teams, and if you didn't define the APIs between teams beforehand, nobody got any bloody work done. In the dependent-team case, you literally don't have a choice about defining the API (or network protocol) ahead of time.

I imagine it's the same with a web-service like Gmail, with many layers of APIs between email storage and email UI. Seeing as they're always rewriting everything, those APIs may as well be seen as the up-front constraints on the next rewrite :)

Even in my job, I define a robust graphics API, and then while the games team builds game-rendering features above it, I'm busy working below it, writing implementations for 5 back-ends :D

While a good discussion could be had about design methodologies, the role of strong specifications, layering, and team designation... I can't help but keep returning to the fact that the OP's article is a terrible, strong-arm sales pitch aimed at non-technical managers of tech teams, trying to trick them into being hyped over a nonsense manufactured trend, which the author intends to profit from. :lol:

#5254498 What exactly is API-First?

Posted by Hodgman on 28 September 2015 - 04:26 PM

Chet Kapoor is the chief executive of Apigee and previously served as vice president of content management and search products at IBM and VP/general manager for the integration group of BEA Systems.

So, it's written by a guy who's built a career in sounding like he's managing tech people... And who now manages a business that claims to "power the most API programs" with its "API management" platform... a business whose website spruiks "APIs for dummys".

And we're surprised that he's writing authoritative-sounding drivel about how APIs are somehow a new thing?... Which probably sounds very persuasive to other pretending-to-be-tech-manager VP types, who'll force their actual tech underlings to write their new APIs on top of Mr Kapoor's platform? Smart.

I'm going to start asking people if their businesses use API programs, because they really should be! How ever did we get by before we could write API programs!?

#5254495 Hunting Game

Posted by Hodgman on 28 September 2015 - 04:14 PM

Any takers???

What are we taking? You didn't post your idea.

#5254487 Tool to measure memory fragmentation

Posted by Hodgman on 28 September 2015 - 03:42 PM

If you want this quickly, then just buy Elephant and Goldfish :)
I haven't used it, but I very nearly bought it when starting my current game project.

All the companies that I've worked for have had in-house allocators, tracking and visualisation tools, similar to the above. It's not that hard to implement yourself, but it takes a lot of testing in anything allocation-related to have confidence you've not created subtle but dangerous bugs ;)

I think you need to analyse the memory usage first.
Then, based on the results of the memory fragmentation analysis you can decide whether you need to use a custom allocator or not. Does that make sense?

You need a custom allocator to implement tracking/logging though :lol:
To begin with, you can just do the logging and pass through to malloc/etc. instead of implementing a full allocator yourself.
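
A bare-bones C++ sketch of that pass-through stage (the 'tag' parameter is just one way to attribute allocations):

#include <cstdio>
#include <cstdlib>

// Log every allocation/free, then forward to malloc/free. A real version would
// also record call-stacks, sizes over time, etc. for the visualisation tools.
void* TrackedAlloc(std::size_t size, const char* tag)
{
    void* ptr = std::malloc(size);
    std::fprintf(stderr, "alloc %zu bytes at %p [%s]\n", size, ptr, tag);
    return ptr;
}

void TrackedFree(void* ptr, const char* tag)
{
    std::fprintf(stderr, "free  %p [%s]\n", ptr, tag);
    std::free(ptr);
}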

If you start to come close to exhausting your memory space (e.g. 2GB or 3GB on a 32-bit program) and need another large allocation then those are important, but for most PC programmers it hasn't been an issue for two decades.

I wish! Lots of companies are still making the switch from x86 to 64-bit even now :( My previous employer shipped their first 64-bit PC game this year... and only because they were forced to by the address-space limit. Regular 32-bit x86 address space is really tight for modern games these days.

#5254331 Singleton and template Error (C++)

Posted by Hodgman on 27 September 2015 - 11:50 PM

Down that path lies passing 60 parameters to constructors and grotesque contortions just to avoid using a global here and there. All of your code becomes harder to read, harder to understand, and harder to maintain.

You're comparing using global variables, to... writing terrible, terrible code....
How about you don't use globals, and *gasp*, also don't write terrible code? :P

Your OS syscalls are, for all intents and purposes, services/singletons.

And they provide a service to the EXE as a whole / their mutable state is owned by the EXE as a whole, so they don't have a choice but to be global (to the exe).

Development services such as a logger can be argued to fit into the same category, so they can get away with using global state.
Almost everything in a game code-base does not fit into that category... and no, the alternative is not grotesque contortions and passing 60 parameters everywhere...
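
The sane middle ground tends to look like this C++ sketch (placeholder types; group what a system actually depends on and pass it explicitly):

struct GpuDevice;       // placeholders -- stand-ins for whatever your engine uses
struct AssetCache;
struct FrameAllocator;
struct World;

struct RenderContext    // the few dependencies this subsystem actually needs
{
    GpuDevice*      device;
    AssetCache*     assets;
    FrameAllocator* frameAlloc;
};

void DrawWorld(const RenderContext& ctx, const World& world)
{
    // ...uses ctx.device / ctx.assets explicitly -- no hidden global state,
    // and no 60-parameter constructor either...
}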

I am seriously tired of dealing with the cognitive overhead of dependency injection frameworks

I've also seen some pretty terrible global service locator frameworks. The problem there is bloated frameworks and bad "architects" :P

#5254328 Any way to make shaders all share the same functions?

Posted by Hodgman on 27 September 2015 - 11:40 PM

Use #includes, like in C.

Unfortunately, GLSL is pants-on-head silly here: you have to pass a char-array of source code to the driver at runtime... which means you've got to implement code-organization features like #include yourself.


I'd recommend using an existing preprocessor implementation, such as mcpp... but if all you need is #include, it's pretty easy to implement on your own.
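
As a sketch, a bare-bones expander in C++ might look like this (no include guards, no search paths, no error reporting, and paths resolve relative to the working directory):

#include <fstream>
#include <sstream>
#include <string>

std::string Preprocess(const std::string& path)
{
    std::ifstream file(path);
    std::stringstream out;
    std::string line;
    while (std::getline(file, line))
    {
        const std::string directive = "#include \"";
        std::size_t pos = line.find(directive);
        if (pos != std::string::npos)
        {
            std::size_t start = pos + directive.size();
            std::size_t end   = line.find('"', start);
            out << Preprocess(line.substr(start, end - start)) << "\n"; // recurse into the included file
        }
        else
        {
            out << line << "\n";
        }
    }
    return out.str();
}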


While you're at it, you should make a little GLSL-compiler tool of your own, which preprocesses your source code into a single output file, and also validates that your code is correct using the reference compiler.


If you ever do a D3D port, you'll want to have a similar pre-processor/pre-compiler setup. HLSL does support #include out of the box, but it also supports pre-compilation. So instead of using the glsl-reference-compiler, you'd use the actual hlsl-compiler (fxc.exe) to precompile your (many) source files for each shader into a single binary precompiled shader file.

#5254300 Managing input layouts

Posted by Hodgman on 27 September 2015 - 08:53 PM

I have a config file where I describe vertex structures (the inputs to vertex shaders) and stream formats (the formats in which vertex attributes are stored in memory).

In the same config file, I then have a list of which vertex structures will be used with which stream formats.

I can then use that config file to build D3D11 input layouts, D3D9 vertex declarations, GL VAO configs, etc...
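
For illustration, this is the kind of thing such a tool might emit for D3D11 (the vertex layout here is a made-up example, not my actual config):

#include <d3d11.h>

// Generated from a config entry pairing a position/normal/uv vertex structure
// with a single interleaved stream in slot 0.
static const D3D11_INPUT_ELEMENT_DESC kVertexPNT_Stream0[] =
{
    // semantic    idx  format                       slot offset  class                        step
    { "POSITION",  0,   DXGI_FORMAT_R32G32B32_FLOAT,  0,   0,     D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",    0,   DXGI_FORMAT_R32G32B32_FLOAT,  0,  12,     D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD",  0,   DXGI_FORMAT_R32G32_FLOAT,     0,  24,     D3D11_INPUT_PER_VERTEX_DATA, 0 },
};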


Instead of using reflection on my vertex shaders, I just force them to declare which vertex-format from the config file they're using. I actually use a tool to convert the config file into an HLSL header file that contains these vertex structures.


When importing a model, I can see which shaders it's using, which vertex formats those shaders use, and then I can generate a list of possible stream-formats that will be compatible with those shaders. The model importer can then pick the most suitable stream-format from that list, convert the Collada/etc data to that stream format, and then record the names of the appropriate InputLayouts to use at runtime.

#5254241 Use openGL in game engines

Posted by Hodgman on 27 September 2015 - 06:32 AM

"Addons" for games are usually called mods. Game usually have to be designed to support mods. Most mobile games are not designed to support mods... so the details for how to make one will vary greatly on a case by case basis. You'll have to reverse engineer / hack apart each game individually to find the best way to modify and extend it.

#5254240 binary alpha

Posted by Hodgman on 27 September 2015 - 06:18 AM

clip( finalColor.w - 0.5 ); // discard this pixel if alpha is <50%

#5254219 Should I learn c++ for game dev?

Posted by Hodgman on 27 September 2015 - 01:19 AM

As well as the above (getting rid of C++ niceties), it's a completely different paradigm than C++.

C is generally used in a procedural paradigm, whereas C++ is generally used in an OO paradigm. Programmers should also learn JavaScript, to appreciate the prototype paradigm, and a functional language too.

e.g. lots of people struggle to write large, manageable, flexible shader codebases in HLSL/GLSL because they're not well practiced in the procedural style.
C++ is multi-paradigm, so there'll be times where you'll need to write very C-style code in C++.

Back to what braindigitalis was saying though, it can be used to demonstrate a deep understanding of C++, due to its simplicity. e.g. It's one thing to be able to use virtual function calls in C++, but it's another to be able to demonstrate how they'd be implemented in C code. If an interview candidate can pass that test, then I've got a whole lot more trust that they truly understand what they're doing :)
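
The classic interview answer looks something like this C-style sketch (real compilers differ in the details; this is just the idea):

#include <stdio.h>

typedef struct Animal Animal;

typedef struct AnimalVTable
{
    void (*speak)(Animal* self);   /* one function pointer per virtual method */
} AnimalVTable;

struct Animal
{
    const AnimalVTable* vtable;    /* the hidden pointer the compiler adds per object */
};

static void DogSpeak(Animal* self) { (void)self; printf("Woof!\n"); }
static const AnimalVTable kDogVTable = { DogSpeak };

int main(void)
{
    Animal dog = { &kDogVTable };
    dog.vtable->speak(&dog);       /* roughly what "animal->speak()" compiles down to */
    return 0;
}
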
Likewise, at the last company I worked for, they write all their game code in Lua. However, they require all the gameplay programmers to know C++, as it ensures that they'll truly understand the impact of the Lua code that they write.

For a super-technical position -- e.g. programmer in charge of the game engine, I'd probably want them to be able to read assembly code, not just C :)