
Hodgman

Member Since 14 Feb 2007

#5300357 Should I leave Unity?

Posted by Hodgman on 12 July 2016 - 05:14 AM

 

In C# use structs over classes where possible
 

What's the reason for this being a win?

My experience is mainly with C++, so this sounds very foreign.

 

C++ lets any type be passed by reference or by value, or to be heap allocated or stack allocated.

C# instead has "reference types" (classes) and "value types" (structs, primitives, etc). Reference types are always heap allocated, passed by reference (pointer), and garbage collected. Value types are passed by value by default and are stored inline -- on the stack, or embedded within whatever object/array contains them -- so they don't create individual objects for the GC to track.
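Roughly, in C# terms (illustrative names only):

class PointClass   { public float X, Y; }  // reference type: each instance is a GC-tracked heap object
struct PointStruct { public float X, Y; }  // value type: stored inline, no per-instance heap object

class Demo
{
	static void Main()
	{
		var a = new PointClass  { X = 1 };   // heap allocation; the GC must track and collect it
		var b = new PointStruct { X = 1 };   // stack storage; nothing for the GC to do

		var objects = new PointClass[20000];  // array of references -- filling it creates 20000 heap objects
		var values  = new PointStruct[20000]; // all 20000 structs stored inline: a single GC object
	}
}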




#5300352 Is it ok for a component to expect a gameobject to have certain properties?

Posted by Hodgman on 12 July 2016 - 04:49 AM

Personally this is why I think component-based systems are a bit of a fad - they change problems but don't really fix them. By all means prefer composition where possible, but not every game logic problem is suited to it.

I share the same sentiment, but for different reasons.

It's generally true that composition is preferable to inheritance.

Many OOP programmers, upon having this epiphany, make the declaration that "OO is bad" (even though this is a key teaching of OO, so these people haven't actually learned enough OO in the first place to be able to disown it) and then commit the over-engineering sin of building a "composition framework" to assist them in using their new silver bullet... Which ends up being as inflexible and ill-conceived as their original inheritance anti-designs.

You can use components and composition and game objects, without there ever being a class called "component" or a class called "game object" because those kinds of frameworks are unnecessarily reinventing existing language features as less-capable library features, and/or implementing design idioms (which are ways that you structure your code - meta code) as a library of actual code. Most "component frameworks" are, frankly, bullshit.
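e.g. a rough sketch of composition using nothing but plain language features (all names invented for illustration):

using System.Collections.Generic;

// The "components" are ordinary classes; the "game object" is just a class
// that composes exactly what it needs. No Component base class, no framework.
class Health    { public int Current = 100, Max = 100; }
class Inventory { public List<string> Items = new List<string>(); }

class Player
{
	public Health Health = new Health();
	public Inventory Inventory = new Inventory();
}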


#5300322 Copyright

Posted by Hodgman on 12 July 2016 - 01:47 AM

Yes, that's bad. If your movie is based on their game (same setting, characters, plot, visual designs, etc) then your movie is a "derivative work" and you need permission.




#5300288 Should I leave Unity?

Posted by Hodgman on 11 July 2016 - 08:03 PM

The garbage collector is killing me because of the amount of objects my game has

Having a lot of objects shouldn't be a problem for a generational garbage collector anyway; deleting a lot of objects is the problem.
 
But a quick google search says that Unity doesn't use a generational garbage collector :(
I've shipped a lot of games using Lua, which also has a pretty terrible (non-generational) garbage collector, where the cost grows with the number of live objects. There are always ways to work around it :(
Firstly, avoid generating garbage (i.e. avoid deleting objects), as this creates work for even the best garbage collector. Reuse old objects when you can. e.g. if you know that a gun can fire a stream of 10 bullets, then make a pool of 10 bullet structures and keep them around permanently (instead of making a new one for each shot). Make objects bigger when you can -- 1 object of size 10 is better than 10 objects of size 1. In C# use structs over classes where possible, and flat arrays instead of more advanced collections. e.g. if you make an array of 20000 chunk structures, that would hopefully be a single 'object' as far as the GC is concerned.
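A minimal C# sketch of that bullet-pool idea (names invented for illustration):

// Fixed-size pool: all 10 bullets are allocated once, up front, and reused
// forever -- firing a shot never allocates, so the GC never gets involved.
struct Bullet { public float X, Y, VelX, VelY; public bool Active; }

class Gun
{
	readonly Bullet[] pool = new Bullet[10]; // one GC object, not 10

	public void Fire(float x, float y, float vx, float vy)
	{
		for (int i = 0; i < pool.Length; ++i)
		{
			if (pool[i].Active) continue;
			pool[i] = new Bullet { X = x, Y = y, VelX = vx, VelY = vy, Active = true };
			return; // reused a dead bullet instead of allocating a new one
		}
		// All 10 in flight: drop the shot (or recycle the oldest).
	}
}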




#5300273 you all say oculus rift but why not google glass?

Posted by Hodgman on 11 July 2016 - 06:31 PM

^^ That. It's AR vs VR.
In some ways, AR will be the VR killer, when it's good enough.
At the moment though, AR systems have terrible FOV, terrible resolution, terrible cost, etc -- when compared with current VR systems.

And it's dead.

Not dead, just back into stealth mode.
It's Microsoft's turn to bring out a prototype (HoloLens) and you can be sure that Google will be back in the game when/if they succeed in bringing it to market.




#5300086 Linux for game development

Posted by Hodgman on 10 July 2016 - 11:25 PM

Why don't you just write the gameplay code in c++?

That's pretty much off-topic when it comes to choosing a linux repo...  :huh:

Why not ditch both and use Java? Because now we're in a language war thread :P

Georger.araujo's link gives a pretty standard reason as to why people often use a higher-level language for gameplay code.




#5300079 pong sound

Posted by Hodgman on 10 July 2016 - 10:19 PM

None of these are specific to pong sounds but should work for you:

 

http://www.fmod.org/

http://www.portaudio.com/

http://www.libsdl.org/

http://icculus.org/SDL_sound/

http://liballeg.org/

http://www.sfml-dev.org/

http://kcat.strangesoft.net/openal.html

https://github.com/R4stl1n/cAudio

http://www.xiph.org/ao/

http://www.ambiera.com/irrklang/

http://www.un4seen.com/

http://clam-project.org/




#5299421 Is DirectXMath thread safe?

Posted by Hodgman on 06 July 2016 - 08:58 PM

They're POD types. If you've externally synchronized them correctly*, then yes, it should be safe for multiple threads to share read-only access to a POD type.

 

*This means that there are appropriate barriers between the code that initializes the data, and the many threads that read the data. The simplest way to ensure this is correct is to wrap the initialization code in a mutex / critical section.
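e.g. the same pattern sketched in C# (DirectXMath itself is C++, but the synchronization idea is identical; names here are invented):

using System.Numerics;

class SharedTransforms
{
	static readonly object initLock = new object();
	static Matrix4x4[] matrices; // the "POD" data shared read-only by many threads

	static void Initialize()
	{
		lock (initLock) // barrier: all writes are published before the lock is released
		{
			var data = new Matrix4x4[64];
			for (int i = 0; i < data.Length; ++i)
				data[i] = Matrix4x4.Identity;
			matrices = data;
		}
	}

	static void ReaderThread()
	{
		Matrix4x4[] local;
		lock (initLock) { local = matrices; } // barrier: pairs with the writer's lock
		// From here on, read-only access from any number of threads is safe.
		var product = local[0] * local[1];
	}
}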




#5299394 Frame buffer speed, when does it matter?

Posted by Hodgman on 06 July 2016 - 04:18 PM

GPU ALU (computation) speeds keep getting faster and faster -- so if a shader was ALU-bottlenecked on an old GPU, then on a newer GPU with faster ALU processing, that same shader would likely become memory-bottlenecked -- so faster GPUs need faster RAM to keep up :)

 

Any shader that does a couple of memory fetches is potentially bottle-necked by memory.

Say for example that a memory fetch has an average latency of 1000 clock cycles, and a shader core can perform one math operation per cycle. If the shader core can juggle two thread(-groups) at once, then an optimal shader would only perform one memory fetch per 1000 math operations.

e.g. say the shader was [MATH*1000, FETCH, MATH*1000] -- the core would start on thread-group #1, do 1000 cycles of ALU work, perform the fetch, and have to wait 1000 cycles for the result (before doing the next 1000 cycles of work). While it's blocked here though, it will switch to thread-group #2 and do its first block of 1000 ALU instructions. By the time it gets to thread-group #2's FETCH instruction (which forces it to block/wait out a 1000 cycle memory latency), the results of thread-group #1's fetch will have arrived from memory, so the core can switch back to thread-group #1 and perform its final 1000 ALU instructions. By the time it's finished doing that, thread-group #2's memory fetch will have completed, so it can go on to finish thread-group #2's final 1000 ALU instructions.

 

If a GPU vendor doubles the speed of their ALU processing unit -- e.g. it's now 2 ALU-ops per cycle, then it doesn't really make this shader go much faster:

The core initially does thread-group #1's first block of 1000 ALU instructions in just 500 cycles, but then hits the fetch, which will take 1000 cycles. So as above, it switches over to processing thread-group #2 and performs its first block of 1000 ALU instructions in just 500 cycles... but now we're only 500 cycles into a 1000 cycle memory latency, so the core has to go idle for 500 cycles, waiting for thread-group #1's fetch to finish.

The GPU vendor would also have to halve their memory latency in order to double the speed of this particular shader.
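To put rough numbers on the example above (two thread-groups, 1000 cycle fetch latency):

GPU 1 (1 op/cycle): $T = 4 \times 1000 = 4000$ cycles for both thread-groups, with all memory latency hidden.
GPU 2 (2 ops/cycle): $T = 4 \times 500 + 500_{\text{stall}} = 2500$ cycles.
Speedup: $4000 / 2500 = 1.6\times$, not $2\times$.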

 

Increasing memory speed is hard though. The trend is that processing speed improves 2x every 2 years, but memory speed improves 2x every 10 years... in which time processing speed has gotten 32x faster... so over a 10 year span, memory effectively becomes 16x slower relative to processing speed :o

Fancy new technologies like HBM aren't really bucking this trend; they're clawing to keep up with it.

 

So GPU vendors have other tricks up their sleeve to reduce observed memory latency, independent of the actual memory latency. In my above example, the observed memory latency is 0 cycles in the first GPU, and 500 cycles on the second GPU, despite the actual memory latency being 1000 cycles in both cases. Adding more concurrent thread-groups allows the GPU to form a deep pipeline and keep the processing units busy while performing these very latent memory fetches.

 

So as a GPU vendor increases their processing speed (at a rate of roughly 2x every 2 years), they also need to increase their memory speeds and/or the depth of their pipelining. As above, as an industry, we're not capable of improving memory at the same rate as we improve processing speeds... so GPU vendors are forced to improve memory speed when they can (when a fancy new technology comes out every 5 years), and increase pipelining and compression when they can't.

 

On that last point -- yep, GPUs also implement a lot of compression on either end of a memory bus in order to decrease the required bandwidth. E.g. DXT/BC texture formats don't just reduce the memory requirements for your game; they also make your shaders run faster as they're moving less data over the bus! Or more recently: it's pretty common for neighbouring pixels on the screen to have similar colours, so AMD GPUs have a compression algorithm that exploits this fact - to buffer/cache pixel shader output values and then losslessly block-compress them before they're written to GPU-RAM. Some GPUs even have hardware dedicated to implementing LZ77, JPEG, H264, etc...

Besides hardware-implemented compression, compressing your own data yourself has always been a big optimization opportunity. e.g. back on PS3/Xb360 games, I shaved a good number of milliseconds off the frame-time by changing all of our vertex attributes from 32 bit floats to a mixture of 16 bit float and 16/11/10/8 bit fixed point values, reducing the vertex shader's memory bandwidth requirement by over half.
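A small C# sketch of the kind of packing involved (illustrative only -- not the exact formats we used on those consoles; assumes .NET 5+ for System.Half):

using System;

static class VertexPacking
{
	// 32 bit float -> 16 bit half float.
	public static Half PackHalf(float v) => (Half)v;

	// 32 bit float in [-1, 1] -> 16 bit signed fixed point (SNORM-style).
	public static short PackSnorm16(float v) =>
		(short)MathF.Round(Math.Clamp(v, -1f, 1f) * 32767f);

	public static float UnpackSnorm16(short v) =>
		MathF.Max(v / 32767f, -1f);
}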




#5299074 Unity vs Unreal Physics for driving game

Posted by Hodgman on 04 July 2016 - 07:46 PM

Depending on the type of driving game, you'll be writing a lot of the vehicle physics yourself, and just using the underlying physics engine for collision detection and integration.




#5298956 Porting OpenGL to Direct3D 11 : How to handle Input Layouts?

Posted by Hodgman on 03 July 2016 - 09:56 PM

While writing the abstraction i hit a bit of a road block : Input Layouts. So to my knowledge in Direct3D 11 you have to define Input Layout per shader (by providing Shader Bytecode). Whereas in OpenGL you have to make glVertexAttribPointer calls for each attribute

It's not per-shader, but per vertex shader input structure. If two shaders use the same vertex structure as their input, they can share an Input Layout. The bytecode parameter when creating an IL is actually only used to extract the shader's vertex input structure and pair it up with the attributes described in your descriptor.
In my engine, I never actually pass any real shaders into that function -- I compile dummy code for each of my HLSL vertex structures which is only used during IL creation -- e.g. given some structure definitions:
StreamFormat("colored2LightmappedStream",  -- VBO attribute layouts
{
	[VertexStream(0)] = 
	{
		{ Float, 3, Position },
	},
	[VertexStream(1)] = 
	{
		{ Float, 3, Normal },
		{ Float, 3, Tangent },
		{ Float, 2, TexCoord, 0 },
		{ Float, 2, TexCoord, 1, "Unique_UVs" },
		{ Float, 4, Color, 0, "Vertex_Color" },
		{ Float, 4, Color, 1, "Vertex_Color_Mat" },
	},
})
VertexFormat("colored2LightmappedVertex",  -- VS input structure
{
	{ "position",  float3, Position },
	{ "color",	   float4, Color, 0 },
	{ "color2",    float4, Color, 1 },
	{ "texcoord",  float2, TexCoord, 0 },
	{ "texcoord2", float2, TexCoord, 1 },
	{ "normal",    float3, Normal },
	{ "tangent",   float3, Tangent },
})
StreamFormat("basicPostStream",  -- VBO attribute layouts
{
	[VertexStream(0)] = 
	{
		{ Float, 2, Position },
		{ Float, 2, TexCoord },
	},
})
VertexFormat("basicPostVertex",  -- VS input structure
{
	{ "position", float2, Position },
	{ "texcoord", float2, TexCoord },
})
this HLSL file is automatically generated and then compiled by my engine's toolchain, to be used as the bytecode when creating IL objects:
/*[FX]
Pass( 0, 'test_basicPostVertex', {
	vertexShader = 'vs_test_basicPostVertex';
	vertexLayout = 'basicPostVertex';
})*/
float4 vs_test_basicPostVertex( basicPostVertex inputs ) : SV_POSITION
{
	float4 hax = (float4)0;
	hax += (float4)(float)inputs.position;
	hax += (float4)(float)inputs.texcoord;
	return hax;
}
/*[FX]
Pass( 1, 'test_colored2LightmappedVertex', {
	vertexShader = 'vs_test_colored2LightmappedVertex';
	vertexLayout = 'colored2LightmappedVertex';
})*/
float4 vs_test_colored2LightmappedVertex( colored2LightmappedVertex inputs ) : SV_POSITION
{
	float4 hax = (float4)0;
	hax += (float4)(float)inputs.position;
	hax += (float4)(float)inputs.color;
	hax += (float4)(float)inputs.color2;
	hax += (float4)(float)inputs.texcoord;
	hax += (float4)(float)inputs.texcoord2;
	hax += (float4)(float)inputs.normal;
	hax += (float4)(float)inputs.tangent;
	return hax;
}

Won't claim this is the best/only way of doing this, but I define a "Geometry Input" object that is more or less equivalent to a VAO. It holds a vertex format and the buffers that are bound all together in one bundle. The vertex format is defined identically to D3D11_INPUT_ELEMENT_DESC in an array. In GL, this pretty much just maps onto a VAO. (It also virtualizes neatly to devices that don't have working implementations of VAO. Sadly they do exist.) In D3D, it holds an input layout plus a bunch of buffer references and the metadata for how they're bound to the pipeline.

The only problem with that is that an IL is a glue/translation object between a "Geometry Input" and a VS input structure -- it doesn't just describe the layout of your geometry/attributes in memory, but also describes the order that they appear in the vertex shader. In the general case, you can have many different "Geometry Input" data layouts that are compatible with a single VS input structure -- and many VS input structures that are compatible with a single "Geometry Input" data layout.
i.e. in general, it's a many-to-many relationship between the layouts of your buffered attributes in memory, and the structure that's declared in the VS.
 
In my engine:
* the "geometry input" object contains a "attribute layout" object handle, which describes how the different attributes are laid out within the buffer objects.
* the "shader program" object contains a "vertex layout" object handle, which describes which attributes are consumed and in what order.
* When you create a draw-item (which requires specifying both a "geometry input" and a "shader program"), then a compatible D3D IL object is fetched from a 2D table, indexed by the "attribute layout" ID and the "vertex layout" ID.
* This table is generated ahead of time by the toolchain, by inspecting all of the attribute layouts and vertex layouts that have been declared, and creating input layout descriptors for all the compatible pairs.
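A rough C# sketch of that last lookup step (all type/member names invented for illustration; the real table is baked by the toolchain):

class InputLayout { /* stand-in wrapper around a D3D11 input layout object */ }

class InputLayoutTable
{
	readonly InputLayout[,] table; // indexed by [attributeLayoutId, vertexLayoutId]

	public InputLayoutTable(InputLayout[,] precomputedByToolchain)
	{
		table = precomputedByToolchain;
	}

	// Called when creating a draw-item from a "geometry input" + "shader program".
	public InputLayout Get(int attributeLayoutId, int vertexLayoutId)
	{
		return table[attributeLayoutId, vertexLayoutId]; // null => incompatible pair
	}
}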


#5298885 draw lights in render engine

Posted by Hodgman on 03 July 2016 - 06:22 AM

Those are both different implementations of forward lighting. (1) Is single pass forward lighting and (2) is multi-pass forward lighting.
(2) Used to be popular back before shaders, or with the early shader models.
(1) Replaced it when shaders became flexible enough.

They should both produce the same visual result -- except if you're not doing HDR (e.g. are using an 8-bit back buffer). In that situation, (2) will have an implicit saturate(result) at the end of every light, whereas (1) will only have this implicit clamp right at the end of the lighting loop.
 
There's also a middle-ground that prevents a technique explosion -- stop at material + N lights, and use
foreach(model in models) 
{ 
  for( i = 0; i < model.HitLights.Count; i += N )
    model.draw(model.material,  model.HitLights.SubRange(i, min(i+N, model.HitLights.Count)) );
}
Or another alternative -- you used to pre-compile many shader permutations (material + 1 light, material + 2 lights ...) because using a dynamic loop inside a shader used to be extremely slow.
These days, loops are pretty damn cheap though, so you can just put the number of lights (and an array of light data) into a cbuffer and use a single shader technique for any number of lights in one pass.


#5298794 Copy texture from back buffer in Directx12

Posted by Hodgman on 02 July 2016 - 06:30 AM

If it's crashing, are you just not checking for success/failure? If it's failing, make sure you've got the directx debug layer installed.




#5298780 Does adding Delegates/Function pointers to an entity break ECS ideology?

Posted by Hodgman on 02 July 2016 - 02:10 AM

^What Josh said. ECS is not one particular pattern.

I'd say "pure ECS ideology" means: Entity doesn't has any logic, and components either. Logic goes into your systems. Entities are IDs, components are data, systems have all the logic. End of story.
 
So you wouldn't do any of what you mentioned (neither adding functions to an entity nor adding functions to a component).

Well a delegate is data, so that fits your description. A "CallDelegateOnConditionMet" system would have components containing game-state conditions to check, and delegates for the system to call when those conditions have been met :lol:
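e.g. a minimal C# sketch of that system (names invented for illustration):

using System;
using System.Collections.Generic;

struct GameState { public float PlayerHealth; }

// The component is pure data -- the delegate is just another field.
struct CallDelegateOnConditionMet
{
	public Func<GameState, bool> Condition; // game-state condition to check
	public Action<int> OnMet;               // called with the entity ID when the condition is met
}

// The system owns all the logic.
class CallDelegateOnConditionMetSystem
{
	public void Update(GameState state,
	                   Dictionary<int, CallDelegateOnConditionMet> components)
	{
		foreach (var kvp in components)
			if (kvp.Value.Condition(state))
				kvp.Value.OnMet(kvp.Key);
	}
}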




#5298626 Axis orientation

Posted by Hodgman on 30 June 2016 - 06:12 AM

By default, with no camera logic or anything, the hardware itself assumes that x is across the screen to the right, y is either up or down the screen, and z is either into or out of the screen.

 

On top of that, you can build any convention that you like.

Often games use a right-handed coord system -- hold up your thumb and first two fingers, with thumb pointing right, index pointing up, and middle finger pointing towards you -- that's X, Y and Z.

Other games use right handed with Z as up -- thumb right, index finger away from you, middle finger up.

 

Other games use a left-handed coordinate system... Z up, Z in, Z out, etc...

There's no "standard" :(

 

Back in the 80's and early 90's, 3d level editors were usually just 2d applications with a top-down view, which would show a floor-plan with X/Y axes, which meant that Z became up and down. That convention is still popular with a lot of level designers, or anyone who's used CAD software.

Nowadays, Z defaults to in/out of the screen as I said at the start, so other games keep that convention and use Y as up and down in the world...

 

Then you've also got to define your rotational conventions. Is a positive X-axis rotation clockwise when looking from the origin out along +X, or anti-clockwise? :lol:





