
Hodgman

Member Since 14 Feb 2007
Offline Last Active Today, 07:49 AM

#5274397 Footware at work

Posted by Hodgman on 04 February 2016 - 11:06 PM

When I worked in corporate software, I got told off for wearing thongs ("flip flops"), as apparently it's a breach of workplace safety laws, and leaves them open to a lawsuit if I hurt my toes, or whatever...

In games software, it's common for people to be shoeless and not get told off.

#5274363 Circular clipping (Target HUD)

Posted by Hodgman on 04 February 2016 - 07:14 PM

clip( -distance( texcoord, circleCenter_texcoord ) + circleRadius )
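The same test can be sketched on the CPU to see what the HLSL `clip()` call is doing: the pixel survives only when the texcoord lies within `circleRadius` of the circle's centre. A minimal C++ sketch (the names `insideCircle` and `Float2` are illustrative, not from the original post):

```cpp
#include <cassert>
#include <cmath>

// Illustrative 2D point type standing in for the HLSL float2.
struct Float2 { float x, y; };

// Mirrors the HLSL: clip(v) discards the pixel when v < 0,
// i.e. when distance(texcoord, center) exceeds the radius.
bool insideCircle(Float2 texcoord, Float2 center, float radius)
{
    float dx = texcoord.x - center.x;
    float dy = texcoord.y - center.y;
    return -std::sqrt(dx * dx + dy * dy) + radius >= 0.0f;
}
```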

#5274218 When you realize how dumb a bug is...

Posted by Hodgman on 04 February 2016 - 06:06 AM

I use /Wall (except for PCH and a few things like automatic inlining and padding) on MSVC2015, but it didn't warn me :\

EDIT: even the static analyser does not warn :\
Really? You should get e.g.:

1>------ Build started: Project: engine (Visual Studio 2010), Configuration: Dev Win32 ------
1>  Model.cpp
1>..\..\src\gx\Model.cpp(319): error C2220: warning treated as error - no 'object' file generated
1>..\..\src\gx\Model.cpp(319): warning C4390: ';' : empty controlled statement found; is this the intent?
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

#5274216 Any portable/semi-portable way of forcing class field order in C++?

Posted by Hodgman on 04 February 2016 - 05:50 AM

So far Googling has led me to the notion that it's, to quote StackOverflow, "a bit of a nightmare".

The SO ivory tower is a tad different to the real world.
I've seen this type of code, working easily, in almost every game I've worked on and make extensive use of it on my current projects. Just make it a POD struct full of public (default) members, use fixed-size types, and be aware of padding/alignment rules.
Like LS, I use static assertions on the size to catch stupid errors, and static assertions on the offsets if I'm being really careful.
On all the platforms/compilers that you care about, the padding/alignment rules will almost certainly be predictable, fairly sane, and reliable enough to write this kind of code without even having to worry about portability issues.

The biggest portability issue I have is that sometimes I have a pointer field in these structs -- or an integer 'offset' that is converted into a real pointer on load. This creates two different versions of the struct for 32/64-bit platforms. Sometimes that's fine if you're building the data per platform (just be sure to use a pointer-sized integer); alternatively, I have a typedef for a "cross-platform pointer-sized integer", which is basically uint64_t.
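A minimal sketch of the pattern described above, assuming the common x86/x64 padding rules of MSVC/GCC/Clang. The struct name and fields are hypothetical; the point is the fixed-size types plus the static assertions on size and offset:

```cpp
#include <cassert>
#include <cstddef>   // offsetof
#include <cstdint>   // fixed-size types

// Hypothetical on-disk record: POD, public members, fixed-size types.
struct ModelHeader
{
    uint32_t magic;
    uint32_t version;
    uint64_t dataOffset;   // the "cross-platform pointer-sized integer"
    float    boundsRadius;
    uint32_t vertexCount;
};

// Catch padding/alignment surprises at compile time.
static_assert(sizeof(ModelHeader) == 24, "unexpected padding in ModelHeader");
static_assert(offsetof(ModelHeader, dataOffset) == 8, "unexpected field offset");
```

If a compiler ever lays this out differently, the build fails immediately instead of silently corrupting loaded data.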

#5274185 WVP and vertex position relationship

Posted by Hodgman on 04 February 2016 - 01:06 AM

Hmmmm. Even when I post multiply the vertex vector (5,5,5,1) with the WVP matrix I get the exact same answer (as I suspected earlier).

1.66292 0       0       0
0       2.21723 0       0
5       5       16.002  1
0       0       -2.002  0


This doesn't make sense -- a vector4 (AKA 1x4 matrix, or 4x1 matrix, depending on conventions) multiplied by a matrix4x4 produces a vector4 result -- it does not produce a matrix4x4 result.

You want to be doing this, where P/V/W are Mat4x4's, and Vertex is a Mat4x1 (aka Vector4 - and Vertex.w should be 1.0):
ProjectedVertex = Projection * View * World * Vertex

Note that you can do it this way, which is three "Mat4x4 * Mat4x1" operations:
ProjectedVertex = Projection * (View * (World * Vertex))
Or you can do it this way, which is two "Mat4x4 * Mat4x4" operations, and one "Mat4x4 * Mat4x1" operation:
ProjectedVertex = ((Projection * View) * World) * Vertex
Both should produce the same result.

So... as well as your Mat4x4 Multiply(Mat4x4 a, Mat4x4 b) function, you also need a Vec4 Multiply(Mat4x4 a, Vec4 b) function.
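A minimal sketch of that missing function, assuming column-vector conventions (each result component is the dot product of a matrix row with the vector) and a flat `float[16]` stored row-by-row -- adapt the indexing to your own matrix class:

```cpp
#include <cassert>

struct Vec4   { float x, y, z, w; };
struct Mat4x4 { float m[16]; };  // m[row * 4 + col]

// result[row] = dot(row of a, b)
Vec4 Multiply(const Mat4x4& a, const Vec4& b)
{
    Vec4 r;
    r.x = a.m[0]  * b.x + a.m[1]  * b.y + a.m[2]  * b.z + a.m[3]  * b.w;
    r.y = a.m[4]  * b.x + a.m[5]  * b.y + a.m[6]  * b.z + a.m[7]  * b.w;
    r.z = a.m[8]  * b.x + a.m[9]  * b.y + a.m[10] * b.z + a.m[11] * b.w;
    r.w = a.m[12] * b.x + a.m[13] * b.y + a.m[14] * b.z + a.m[15] * b.w;
    return r;
}
```

Note the result is a Vec4, not another Mat4x4 -- which is exactly the mistake in the quoted output above.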

Also, assuming there are no transforms, rotations, & scaling required, couldn't you just multiply VP with the vertex vector?

Yes. The "world" matrix is actually the "model space to world space transform". If the vertices are already in world-space, then this matrix would be identity, which does nothing (so could be optimized out).

#5274183 WVP and vertex position relationship

Posted by Hodgman on 04 February 2016 - 12:32 AM

World matrix (actual vertex pos is 5,5,5)

The world matrix is not a vertex...

I would have thought the new x & y co-ords

You haven't transformed any vertices yet. You can't have new x & y coords if you don't even have original/input x & y coords.

After creating a WVP matrix, you can multiply the vertex position [x,y,z,1] against the WVP matrix to get [x',y',z',w']. The screen (NDC) position is then: x_NDC = x'/w', y_NDC = y'/w'
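That final perspective-divide step can be sketched in a few lines of C++ (the `Clip`/`Ndc` names are illustrative, not from the post):

```cpp
#include <cassert>

struct Clip { float x, y, z, w; };  // clip-space result of the WVP multiply
struct Ndc  { float x, y; };        // normalized device coordinates

// Divide by w' to go from clip space to NDC.
Ndc toNdc(Clip c)
{
    return Ndc{ c.x / c.w, c.y / c.w };
}
```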

#5274175 When you realize how dumb a bug is...

Posted by Hodgman on 03 February 2016 - 11:26 PM

You've just gained +5 development points in the programming skill-tree.

You've also suddenly realised why all the people who complain about Python's syntactically-significant whitespace are wrong ;)

And why you should enable a strict warning mode and warnings-as-errors in C++. That code should be a compile-time error.

#5274165 Matrix multiplication question

Posted by Hodgman on 03 February 2016 - 09:12 PM

From what I understand the correct multiplication order to get a WVP matrix is;

Depends on which mathematical conventions you're using.

If you're using column-major mathematical conventions, your matrices look like below and you use: Projection * View * World
$$\begin{bmatrix} Xx & Yx & Zx & Tx\\ Xy & Yy & Zy & Ty\\ Xz & Yz & Zz & Tz\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

If you're using row-major mathematical conventions, your matrices look like below and you use: World * View * Projection
$$\begin{bmatrix} Xx & Xy & Xz & 0\\ Yx & Yy & Yz & 0\\ Zx & Zy & Zz & 0\\ Tx & Ty & Tz & 1 \end{bmatrix}$$

n.b. this is completely unrelated to whether you're using row-major array storage or column-major array storage.
e.g. Bullet uses column-major mathematical conventions (the top kind of matrix), but stores the values in row-major arrays. That's just an implementation detail, which should have no impact on your math.

note#2 - you'll find lots of crap on the internet saying "D3D uses row major, GL uses column major!!" but it's not true any more (ever since fixed-function graphics was replaced by shaders).

Your mathematical convention is dictated only by how you write your shader code (and how your matrix library has been written to populate translation/rotation/projection matrices...) -- e.g. do you write W*V*P or P*V*W in your shaders, and does your library fill in the values to look like the top matrix or the one below it...

Your array storage convention is controlled by keywords in GLSL/HLSL -- e.g. you can write "column_major matrix4x4 fooBar;" to choose column-major array indexing in HLSL.

To multiply this against the identity matrix do you multiply that last?

Multiplying against identity is the same as multiplying a regular number by 1 -- it does nothing, ever.
Identity * World * Identity * View * Identity * Projection * Identity == World * View * Projection

#5274150 Revenue sharing, how to do it? how not to do it?

Posted by Hodgman on 03 February 2016 - 07:53 PM

Frob is right that profit share doesn't [generally] work.

You forgot the qualifying word, which Frob included.

Lots of small companies have a bunch of co-founders, who are all joint-shareholders in the company, but don't necessarily have a strict revenue-sharing agreement in place. That's extremely common.

At the other end of the spectrum, lots of large companies will give all employees shares in the company as a bonus -- those shares then pay out dividends based on the company's profits... That's kind of revenue sharing, but not at all in the same way as these common (naive) "let's split all the income" type plans.

The only game I know of that's succeeded with a typical "let's split all the income" type model is Armello (see page 2)... and even then, they did it properly by actually writing up real employment/contractor's contracts, plus they've got a bunch of co-founders who are all on the board of directors and who pay themselves huge salaries that are equal to the entire rest of the staff combined... i.e. they're actually running a real business, not an expensive hobby

#5274004 Basic matrix question

Posted by Hodgman on 03 February 2016 - 12:00 AM

If I were to now use the x, y, and z co-ordinates for this single vertex (assuming the vertex is originally 0,0,0) would the location of the new vertex always be stored in elements 12, 13, and 14?

A vertex is a 1x4 matrix, or a 4x1 matrix (depending on which mathematical conventions you're using).

Depending on your convention, a transformation matrix multiplied by a vertex looks like:

1x4 * 4x4 = 1x4

4x4 * 4x1 = 4x1

i.e. your matrix has 16 elements, but your input vertex and your result only have 4 elements each.

#5273996 [Design] Meshes and Vertex buffers

Posted by Hodgman on 02 February 2016 - 09:44 PM

the shader determines the required vertex attributes <-- I'm not sure what happens when the shader tries to read an attribute which is not currently bound and set properly.

Yep. The shader determines a list of attributes that are required. The mesh gives you a list of attributes that exist.
Before you draw something, you need to resolve this by selecting the right VertexDeclaration/InputLayout/VAO-config. Yep, each mesh can require more than one VertexDeclaration -- you need one for each pairing of a shader's required attributes with a buffer's available attributes.

If you can't find a valid VertexDeclaration (because the shader requires an attribute that doesn't exist), then announce loudly that there's an error in the data so that your content creators fix the data.

On the other hand, if an attribute exists, but isn't required by the shader, then it simply should not be present in the VertexDeclaration -- pick one that contains only the required attributes and leaves all others out.

some renderers (like ShadowMapRenderer) do not need any attribute except the position.

This is fine - the mesh will just use a different VertexDeclaration when it's used with this different shader.
You need a shader management system that bundles up many programs into one "effect". e.g. the Microsoft FX system allows you to create one "effect" file, which contains a forward-rendering technique and a shadow-mapping technique. Your material chooses which effect to use, and then your renderer chooses which technique to pick out of the effects -- which determines the attributes that are required, which determines the appropriate VertexDeclaration to pick for each mesh.

The optimal vertex layout will depend on which shaders it's used with. In this example, instead of:
Pos1|UV1|Nrm1|Pos2|UV2|Nrm2|Pos3|UV3|Nrm3|
you may want your buffer to be laid out like:
Pos1|Pos2|Pos3|UV1|Nrm1|UV2|Nrm2|UV3|Nrm3|
This way the position-only shader reads a dense, minimal stream, and the full shader will still perform fairly well.
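A rough C++ sketch of that split-stream idea, using two separate arrays rather than one buffer with internal offsets (the type names are hypothetical; real engines would describe this with VertexDeclarations over raw buffers):

```cpp
#include <cassert>
#include <vector>

// Stream 0: positions packed contiguously -- a depth-only/shadow pass
// binds just this stream and touches no other attribute data.
struct Position { float x, y, z; };

// Stream 1: the remaining attributes (UV + normal), interleaved.
struct Extras   { float u, v, nx, ny, nz; };

struct MeshStreams
{
    std::vector<Position> positions;  // Pos1|Pos2|Pos3|...
    std::vector<Extras>   extras;     // UV1|Nrm1|UV2|Nrm2|...
};
```

The position-only VertexDeclaration references stream 0 alone; the full-shading one references both streams at matching vertex indices.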

Posted by Hodgman on 02 February 2016 - 07:44 PM

It's going to need to wait for the G Buffer textures to be generated each frame.  Even if I spread it out to 3 frames, I can't think of a setup where I don't end up waiting for either the default-heap copy of MVP data or the G-Buffer texture data.  It seems like that's going to be inherent in a setup where I need to render a texture as an input to another texture

There's no need to go from 2 frames to 3... You need two frames to cover up the latency of CPU->GPU communication. Generating a texture is GPU->GPU communication, and if it's done with a single command queue, then those commands are all done in serial so there's no need for synchronization, so there's no need for fences or extra buffering.

All you need is to issue a resource barrier to transition the texture from a render-target to a shader-resource.

#5273971 Weather simulation

Posted by Hodgman on 02 February 2016 - 06:47 PM

weather reports from various stations would have been public knowledge?
http://www1.ncdc.noaa.gov/pub/data/noaa/

http://www.ncdc.noaa.gov/cdohtml/3505doc.txt

http://www1.ncdc.noaa.gov/pub/data/ish/ish-format-document.pdf

#5273966 Normalized Blinn Phong

Posted by Hodgman on 02 February 2016 - 06:15 PM

This non-intuitive-ness comes from the fact that we're dealing with energy derivatives here, not real, concrete amounts of energy.

The area under the specular function cannot be greater than 1 or it will be creating extra energy out of nowhere. The function itself can be higher than 1 at individual points.

The "perfect mirror" specular BRDF is actually a delta function, which returns infinity at one single reflection direction and zero everywhere else. The area underneath that function is actually 1, even though it's infinitely tall and thin...
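For a concrete, non-delta example (a standard textbook result, not from the quoted post): the normalized Blinn-Phong lobe $$\frac{n+2}{2\pi}\cos^n\theta_h$$ peaks above 1 whenever $$n > 2\pi - 2$$, yet its cosine-weighted integral over the hemisphere is exactly 1:

$$\frac{n+2}{2\pi}\int_0^{2\pi}\!\!\int_0^{\pi/2}\cos^{n+1}\theta\,\sin\theta\,d\theta\,d\phi = \frac{n+2}{2\pi}\cdot 2\pi\cdot\frac{1}{n+2} = 1$$

So an arbitrarily tall, narrow lobe can still conserve energy.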

Calculus is a hell of a drug.

#5273959 Advice for a "decoupled" game engine

Posted by Hodgman on 02 February 2016 - 04:46 PM

Sounds like what you really want is a good engine with full C++ source code available, so that you're not trapped by its current implementation issues. Working at large game studios, the things you cite aren't a concern, because there's no great big wall between gameplay code and engine code -- if something isn't possible, you just add the code to make it possible (whether that code goes into the engine folder or the game folder).

From the big boys, that's Unreal 4, or Stingray if you have money (or charisma to dodge the money requirement), or CryEngine if you have money/charisma and are also a masochist.
Then there are the smaller engines like C4 (soon Tombstone), or all the open-source ones made out of Ogre plus Bullet plus FMOD, etc... Or you can join that latter category with your own.
I would shamelessly pimp my own, but it's not usable yet.

Not being experienced in Unity, I thought it was based around writing gameplay code in some special snowflake C# variant?  I want to avoid being tied to magical scripting languages like Unity/C# or Unreal blueprints.

Unity uses C# v2 IIRC, or any language that runs on that version of Mono -- e.g. a friend was writing a Unity game in Boo, and then ported it to Lua for CryEngine.
You can always call out to external DLLs though -- whether they're also .NET languages, or native code (see Mono's P/Invoke).
