#5194706 The "action" systems that 2D game frameworks all now have...

Posted by Hodgman on 25 November 2014 - 07:18 PM

Can you register for OnUpdate/etc events?

Lots of game engines I've used in the past were entirely event based, but a damn lot of stuff always ended up in an Entity's called-once-every-frame OnUpdate event.

#5194674 real time reflections from arbitrary surfaces.

Posted by Hodgman on 25 November 2014 - 03:39 PM

Moriffy, you can't just post a link by itself. You also need to post some words/thoughts alongside the link.

Did you create these videos?
Do you want feedback on them?
Do you want to discuss the technique used in them?
Are you just impressed/excited by the videos?
Is the video demonstrating a product or a generally known technique?
How does it work?

What are the pros and cons of this technique?
Is it general ray-tracing?
Does it only work with cubes?
What kinds of BRDFs does it support?

Can we use this technique?

In other words, what kind of replies are you expecting...?

#5194500 Constructors, Factory Methods and Destructor Questions

Posted by Hodgman on 24 November 2014 - 05:25 PM

@Hodgman: The idea of having leaner constructors makes quite a bit of sense. I usually try to do lots of initial setup in my constructors, as well as file loading. I should probably move the file loading to a factory.

I left it at 'people will argue either way', but I personally am a fan of complex constructors :)
For a general purpose file loading library, yeah, I'd prefer a factory function that can either return a file object, or return null/error.
But for a game asset loading library, there are no errors that can occur -- if an asset is missing from the game directory, you blame the user for deleting your data files, and then you crash. So I'd have no issues at all with the constructor of a GameAssetFile class doing complex work like obtaining file pointers, kicking off asynchronous HDD reads, etc...

Another downside to be aware of, though, is that when you use a complex constructor (and no default constructor), you're limiting your class from being usable in some places, e.g. std::vector<GameAssetFile> will no longer work (but std::vector<GameAssetFile*> will).
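To sketch that limitation (GameAssetFile here is a hypothetical stand-in, not code from the original post): the by-value vector won't compile, but a vector of smart pointers works fine:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch: a non-copyable class with a complex constructor
// and no default constructor.
class GameAssetFile {
public:
    explicit GameAssetFile(const std::string& p) : path(p) {} // imagine async I/O kicked off here
    GameAssetFile(const GameAssetFile&) = delete;
    GameAssetFile& operator=(const GameAssetFile&) = delete;
    std::string path;
};

// std::vector<GameAssetFile> files;  // resize() won't compile: no default ctor, no copy ctor
std::vector<std::unique_ptr<GameAssetFile>> MakeFiles() {
    std::vector<std::unique_ptr<GameAssetFile>> files; // a vector of pointers works fine
    files.push_back(std::make_unique<GameAssetFile>("data/level0.pak"));
    return files;
}
```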

Some examples for the original questions; I have these two helper classes:

class NonCopyable
{
protected:
	NonCopyable() {}
private:
	NonCopyable( const NonCopyable& );
	NonCopyable& operator=( const NonCopyable& );
};

class NoCreate
{
private:
	NoCreate( const NoCreate& );
	NoCreate& operator=( const NoCreate& );
};

I use NonCopyable when I don't have a need for a class to be able to make copies of itself and/or when implementing copying would be too complex. I'm a big fan of YAGNI and KISS, so if I don't need a copy operator, I don't write it.

class SomeComplexThing : NonCopyable
{
public:
	SomeComplexThing(const char* filenameToOperateOn);
};

Without inheriting from NonCopyable, the above class would be in breach of the rule of three. Inheriting NonCopyable satisfies the rule, while not requiring me to actually implement copying. If a user tries to copy an object of this class, they'll get a compile time error saying it's not allowed.
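In C++11 and later, the same rule-of-three compliance can be sketched without a helper base, by deleting the copy operations directly:

```cpp
#include <cassert>
#include <string>

// C++11 sketch: deleted copy operations satisfy the rule of three
// without a NonCopyable base and without implementing copying.
class SomeComplexThing {
public:
    explicit SomeComplexThing(const char* filenameToOperateOn)
        : filename(filenameToOperateOn) {}
    SomeComplexThing(const SomeComplexThing&) = delete;            // copying is a compile error
    SomeComplexThing& operator=(const SomeComplexThing&) = delete;
    std::string filename;
};
```

Attempting to copy an object of this class produces the same compile-time error, with a clearer diagnostic than the unimplemented-private-member trick.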
NoCreate is a lot more esoteric. I use it as a helper for a type of non-polymorphic interfaces, something similar to the PIMPL pattern.

class Doodad : NoCreate
{
public:
	int GetHealth();
	std::string GetName();
};
//user cannot create/destroy Doodads, only get pointers to the ones owned by the complex thing
class SomeComplexThing : NonCopyable
{
public:
	SomeComplexThing(const char* filename);
	~SomeComplexThing();
	int GetDoodadCount() const;
	Doodad* GetDoodadByIndex(int i) const;
private:
	void* buffer;
};

//cpp file
SomeComplexThing::SomeComplexThing(const char* fn) : buffer(LoadFile(fn)) {}
SomeComplexThing::~SomeComplexThing()              { FreeFile(buffer); }

struct DoodadFile //implementation of the Doodad interface
{
	int count;
	struct Item
	{
		int health;
		char name[64];
	} items[1]; //actual length given by 'count'
};

int SomeComplexThing::GetDoodadCount() const
{
	DoodadFile* d = (DoodadFile*)buffer;
	return d->count;
}
Doodad* SomeComplexThing::GetDoodadByIndex(int i) const
{
	DoodadFile* d = (DoodadFile*)buffer;
	if( i >= 0 && i < d->count )
		return (Doodad*)&d->items[i];
	return 0;
}

int Doodad::GetHealth()
{
	DoodadFile::Item* self = (DoodadFile::Item*)this;
	return self->health;
}
std::string Doodad::GetName()
{
	DoodadFile::Item* self = (DoodadFile::Item*)this;
	return std::string(self->name);
}


One thing I'd like to mention is that it sounds like inheritance should be used sparingly in C++, whereas other languages like Objective-C and C# thrive off of it.

I wouldn't say "sparingly". Inheritance is used a lot, but it's just not the first tool you reach for.


I would say that in every OO language, inheritance should be used sparingly, but unfortunately many people suffer from inheritance-addiction.
As far as I'm concerned, "prefer composition over inheritance" is one of the core rules of OO design. I won't rant again because I just posted one here :D
But for what it's worth, C# has good support for inheritance and amazing support for composition as well.

#5194380 A few questions about viewports

Posted by Hodgman on 24 November 2014 - 04:56 AM

That's pretty much right except the world/view/proj matrices map into the -1 to 1 range (a.k.a. Normalised device coordinates), and then yeah the viewport remaps into pixel coordinates.

Viewport coords are in pixels, yeah, so they're independent of the render target size.
However, D3D9 has a quirk where whenever you set a render target, the viewport is automatically changed to be the same size as that full texture's size... So always remember to set the viewport after you set your target (and re-set it if you change targets and want your viewport to persist).
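A toy model of that quirk (NOT the real D3D9 API, just an illustration of the ordering rule):

```cpp
#include <cassert>

// Toy model of the quirk: setting a render target silently resets the
// viewport to cover the whole target, clobbering any earlier SetViewport.
struct Viewport { int x, y, width, height; };

struct FakeDevice {
    Viewport viewport{0, 0, 0, 0};
    void SetRenderTarget(int targetWidth, int targetHeight) {
        // D3D9-style behaviour: viewport snaps to the full target size.
        viewport = Viewport{0, 0, targetWidth, targetHeight};
    }
    void SetViewport(const Viewport& vp) { viewport = vp; }
};
```

So the safe ordering is always: set the render target first, then set the viewport.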

#5194358 Time correction when changing a value based on Time passed.

Posted by Hodgman on 23 November 2014 - 10:19 PM

Alright, in our game whenever the player jumps we calculate when he should start falling based on how long he has been jumping for.  This causes issues when the game is running at the fastest setting, since the deltaTime is added more frequently. I tried normalizing the vector returned by multiplying by deltaTime, but that just made the jump return a very tiny movement vector every time. So, how could I go about normalizing the deltaTime?

Here's our code:

private Vector3 OnPress()
{
	jumpTimer = 0;
	jumpVelocity = new Vector3(0f, 11.5f, 0f);
	return jumpVelocity;
}

private Vector3 Rising()
{
	jumpTimer += Time.deltaTime;
	jumpVelocity = (gravity * jumpTimer) + jumpVelocity;
	return jumpVelocity;
}

private Vector3 Dropping()
{
	jumpTimer += Time.deltaTime;
	jumpVelocity = gravity * jumpTimer;
	return jumpVelocity;
}

private Vector3 Landing()
{
	jumpVelocity = Vector3.zero;
	jumpTimer = 0f;
	return jumpVelocity;
}
I believe the issue is within Dropping and Rising. Thanks.

Rewriting your current code, it's pretty much this:
	acceleration = 0;
	velocity = (0, 11.5, 0);
	acceleration += deltaTime * gravity;
	velocity += acceleration; //n.b. deltaTime not used here!!!
	acceleration += deltaTime * gravity;
	velocity = acceleration; //n.b. = used, not += ???
	velocity = (0,0,0);
	acceleration = 0;
...which shows that your equations of motion are wrong. Velocity is updated without any respect to delta time, which is why it will vary greatly with framerate.

Also, I'm not sure why rising and dropping need different update functions. You should be able to just use something like this:
	acceleration = 0;
	velocity = (0, 11.5, 0);
	acceleration += deltaTime * gravity;
	velocity += deltaTime * acceleration;
	velocity = (0,0,0);
	acceleration = 0;
Or an alternate version:
        onGround = false;
	impulse = (0, 11.5, 0);
	onGround = true;
	impulse = (0, -velocity.y, 0);
        if( !onGround )
		acceleration += gravity * deltaTime ;
	velocity += acceleration * deltaTime + impulse;
	impulse = (0,0,0);
^^Both of those versions update velocity using deltaTime, so they should be more stable. However, they will still give slightly different results with different frame-rates, due to them being an approximate numerical integration of the motion curve.
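To see that frame-rate dependence concretely, here's a small sketch (C++ for brevity, constants illustrative) integrating the same one-second jump with two different step sizes:

```cpp
#include <cassert>

// Sketch: semi-implicit Euler integration of a one-second jump at two
// different frame rates. Both apply deltaTime correctly, yet they land
// at slightly different heights due to numerical integration error.
double SimulateJump(double dt, double duration) {
    double velocity = 11.5, height = 0.0;
    const double gravity = -9.8;
    for (double t = 0.0; t < duration; t += dt) {
        velocity += gravity * dt;  // acceleration applied with dt
        height   += velocity * dt; // velocity applied with dt
    }
    return height;
}
```

Sampling at 30Hz vs 120Hz gives noticeably different answers for the same second of motion; that residual error is what the fixed-timestep and absolute-time approaches eliminate.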

To solve that, you can either use a fixed timestep (as mentioned by everyone above), or you can use a version that is based on absolute time values, instead of delta time values, which makes it perfectly deterministic and always works the same regardless of framerate:
	onGround = false;
	initialJumpHeight = height;
	initialJumpVelocity = 11.5;
	timeAtJump = timeNow;
	onGround = true;
	if( !onGround ) {
		timeSinceJump = timeNow - timeAtJump;
		//motion under constant acceleration: o' = o + ut + att/2
		//(o'=new pos, o=initial pos, u=initial velocity, t=time since initial conditions, a=acceleration)
		height = initialJumpHeight + initialJumpVelocity*timeSinceJump + 0.5*gravity*timeSinceJump*timeSinceJump;
	}
	return height;
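That absolute-time approach boils down to one pure function (a C++ sketch; constants illustrative):

```cpp
#include <cassert>

// Motion under constant acceleration: h(t) = h0 + u*t + (a*t*t)/2.
// Given the absolute time since the jump started, the result is fully
// deterministic and independent of frame rate.
double JumpHeight(double initialHeight, double initialVelocity,
                  double gravity, double timeSinceJump) {
    return initialHeight
         + initialVelocity * timeSinceJump
         + 0.5 * gravity * timeSinceJump * timeSinceJump;
}
```

Because the height is computed from absolute time rather than accumulated per-frame deltas, every machine evaluating the same timestamp gets the same trajectory.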

#5194347 Is it possible to do Batch Rendering with 3D skeletal animation data?

Posted by Hodgman on 23 November 2014 - 07:34 PM

If I want each stick to be animated and positioned correctly, I'd need three 4x3 transformation matrices to define how to move and rotate each of the sticks in 3D space.  If I do what you propose, I'd have 3 indices per vertex, and an array containing three 4x3 matrices?  This doesn't scale very well as a model will have 100's of vertices which means 100's of 4x3 matrices.

I don't understand how you've come up with three in the bolded bit. Each stick only has two bones (head/feet), so each stick has two matrices. Each vertex also only has one bone-index, because it's either connected to the head or to the feet.
It doesn't matter how many vertices are in the feet/head. Per object, you have one 'feet' transform and one 'head' transform.
Transform buffer: {Head0, Feet0, Head1, Feet1, Head2, Feet2...}
Vertex Buffer if the head was made up of 2 verts and the feet also of two verts:

//stick 0's verts
  {pos={a,b,c},uv={d,e},bone={0/*aka Head0*/}},
  {pos={f,g,h},uv={i,j},bone={0/*aka Head0*/}},
  {pos={k,l,m},uv={n,o},bone={1/*aka Feet0*/}},
  {pos={p,q,r},uv={s,t},bone={1/*aka Feet0*/}},
//stick 1's verts
  {pos={u,v,w},uv={x,y},bone={2/*aka Head1*/}},
  {pos={z,A,B},uv={C,D},bone={2/*aka Head1*/}},
  {pos={E,F,G},uv={H,I},bone={3/*aka Feet1*/}},

In the vertex shader, you then do something like:

  int boneIndex = vertex.bone;
  Vec4 transform0 = TransformBuffer.Load(boneIndex*3+0);//index*3 because we have 3 Vec4's per transform
  Vec4 transform1 = TransformBuffer.Load(boneIndex*3+1);
  Vec4 transform2 = TransformBuffer.Load(boneIndex*3+2);
  Mat4 transform = Mat4( transform0, transform1, transform2, vec4(0,0,0,1) );
  Vec3 worldPosition = mul(transform, Vec4(vertex.position,1) );

Then as an extension to this, you can get "skinning" (soft transitions between bones) by using more than one bone index per vertex.
e.g. A vertex that's 75% controlled by the head bone, but 25% by the feet bone:
  {pos={a,b,c},uv={d,e},bones={0/*aka Head0*/, 1/*aka Feet0*/}, weights={0.75,0.25}},

Then a VS that loads multiple bone indexes and blend weights for each one.

  int boneIndex0 = vertex.bones.x;
  Vec4 transform0_0 = TransformBuffer.Load(boneIndex0*3+0);
  Vec4 transform0_1 = TransformBuffer.Load(boneIndex0*3+1);
  Vec4 transform0_2 = TransformBuffer.Load(boneIndex0*3+2);

  int boneIndex1 = vertex.bones.y;
  Vec4 transform1_0 = TransformBuffer.Load(boneIndex1*3+0);
  Vec4 transform1_1 = TransformBuffer.Load(boneIndex1*3+1);
  Vec4 transform1_2 = TransformBuffer.Load(boneIndex1*3+2);

  Vec4 transform0 = transform0_0 * vertex.weights[0] + transform1_0 * vertex.weights[1];
  Vec4 transform1 = transform0_1 * vertex.weights[0] + transform1_1 * vertex.weights[1];
  Vec4 transform2 = transform0_2 * vertex.weights[0] + transform1_2 * vertex.weights[1];

  Mat4 transform = Mat4( transform0, transform1, transform2, vec4(0,0,0,1) );

p.s. the above code does horrible linear blending of matrices, which doesn't produce very good quality. Often animation systems will use a quaternion + a vec3 scale + a vec3 position, blending them individually, and then using those blended results to construct a Mat4x4.
p.p.s. Half-Life 1 in 1998 was one of the first games I know of that pioneered "skinned animation", and it's been the de facto standard character animation technique ever since. It's common these days to have characters with, say, 10k verts and 50 bone matrices; next-gen, even more, like 100k verts and 150 bone matrices.
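As a sketch of that per-component blending idea (minimal stand-in types, not engine code): blend a normalised rotation quaternion and a translation separately, then build the matrix from the results.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in types for sketching per-component pose blending.
struct Quat { double x, y, z, w; };
struct Vec3 { double x, y, z; };

// Normalised lerp: a cheap approximation of slerp, fine for small deltas.
Quat Nlerp(Quat a, Quat b, double t) {
    // flip one input if needed so we interpolate along the shortest arc
    double dot  = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    double sign = dot < 0 ? -1.0 : 1.0;
    Quat q { a.x + (sign*b.x - a.x)*t,
             a.y + (sign*b.y - a.y)*t,
             a.z + (sign*b.z - a.z)*t,
             a.w + (sign*b.w - a.w)*t };
    double len = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    q.x /= len; q.y /= len; q.z /= len; q.w /= len; // renormalise
    return q;
}

Vec3 Lerp(Vec3 a, Vec3 b, double t) {
    return Vec3{ a.x + (b.x - a.x)*t, a.y + (b.y - a.y)*t, a.z + (b.z - a.z)*t };
}
```

The blended quaternion and translation (and scale, handled the same way) are then baked into the final matrix, instead of lerping matrix rows directly.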

#5194250 General question: how to keep JavaScript organized

Posted by Hodgman on 23 November 2014 - 06:04 AM

In Java, for example, you would get this result by defining an interface ... In C++ you could create two base classes modules [and] multiple inheritance
But those designs are really workarounds for the weakness of the OOP model: you cannot modify objects at run time, except to modify their properties' values.

That's not an issue with OO itself. Firstly it's a side-effect of inheritance-addiction like I mentioned above, and secondly a problem caused by static typing.
The reason JS excels here is because it has duck typing, which lets you dynamically use properties without statically defining structures. From a C++/Java-esque background, this seems like "everything is virtual", but of course it's much more than that.

var aDuck   = { quack: function(){ alert('quack');  } };
var aPerson = { quack: function(){ alert('Hello!'); } };
function quack( who ){ who.quack(); }

C++ only has duck typing inside of templates, e.g.

struct Duck   { void Quack() { print("quack");  } };
struct Person { void Quack() { print("Hello!"); } };
template<class T> void Quack( T& object ) { object.Quack(); }
void Test() {
  Duck aDuck;
  Person aPerson;
  Quack(aDuck);   //both compile because each T has a Quack method
  Quack(aPerson);
}

And yes as you mention, at other times where static typing is enforced more strongly, you can instead use interfaces like this in C++ (Java is identical, except the syntax differs):

struct IQuackable { virtual void Quack() = 0; };
struct Duck   : public virtual IQuackable { void Quack() { print("quack");  } };
struct Person : public virtual IQuackable { void Quack() { print("Hello!"); } };
void Quack( IQuackable& quackable ) { quackable.Quack(); }
void Test() {
  Duck aDuck;
  Person aPerson;
  Quack(aDuck);
  Quack(aPerson);
}
However... that's not the only solution that 'OO' offers. A core rule of OO is that you should default to using composition, and only use inheritance where necessary (Java shits all over this rule, so IMHO I wouldn't choose to recognize it a real OO language until it agrees to an intervention tongue.png).

JS has first-class functions, and C++ almost does (relying on library support over language support) -- which lets you rewrite the above interface/inheritance based solution as a function/composition based one, like this:

typedef std::function<void()> Quacker;
struct Duck   { void Quack() { print("quack");  } Quacker GetQuacker() { return std::bind(&Duck::Quack, this);   } };
struct Person { void Quack() { print("Hello!"); } Quacker GetQuacker() { return std::bind(&Person::Quack, this); } };
void Quack( Quacker& quacker ) { quacker(); }
void Test() {
  Duck aDuck;
  Person aPerson;
  Quacker a = aDuck.GetQuacker();
  Quacker b = aPerson.GetQuacker();
  Quack(a);
  Quack(b);
}

Again, after learning JS properly, try bringing all the wonderful new perspectives of its paradigms back to other languages like C++ :D


C# also does pretty well in this regard. Anonymous functions, closures, delegates, events and generics are great tools to have in your toolbox. In C# 4 there's actually full-blown duck typing available (via the dynamic keyword)! Another way to practice JS-style thinking is to just try using these other tools instead of interfaces all the time in the languages you already know.

#5194249 General question: how to keep JavaScript organized

Posted by Hodgman on 23 November 2014 - 05:37 AM

If you search for "right way to learn JavaScript", there's a lot of decent advice on how not to fall into the abundant traps that exist due to the immense amount of bad JS out there.

BTW, JS's particular flavour of OO is the prototype paradigm.

Lastly, the kind of inheritance that C++/Java are renowned for shouldn't actually be very common in good C++ code. This over-use of inheritance is a remnant of the 90's when OOP was a new fad that everyone dived into without actually groking, resulting in lots of bad code, and then lots of people learning from that bad code... Much like JS :)
Writing C++ using composition over inheritance and lots of std::function instead of virtual would be a great eye opener if you're a C++ coder that wants to gain new perspectives on OOP.
Learning JS (properly) will be similarly eye opening.

#5194221 Audio API

Posted by Hodgman on 22 November 2014 - 10:11 PM

Note that with FMOD it's actually split into two API - a high level one based around its tools, and a low level one based around actually mixing audio data.
I'm not much of an audio programmer either, but AFAIK you can write plugins to actually generate or process audio samples/streams yourself.

Also AFAIK, a lot of audio stuff seems to be done in software these days instead of using dedicated mixing hardware, with largely pre-mixed data being sent to the OS-level APIs in the end.

#5194215 Book on Physics-Based Rendering

Posted by Hodgman on 22 November 2014 - 08:59 PM

Look up the annual SIGGRAPH Course "Physically Based Shading in Theory and Practice" - the notes are usually put online not long after the conference.
It covers a lot, and adjusts every year with new developments.

#5194214 Is it possible to do Batch Rendering with 3D skeletal animation data?

Posted by Hodgman on 22 November 2014 - 08:43 PM

Use an array of transforms.
Each vertex can store an index into that array - or more commonly 4 indices and 4 weights for smooth 'skinning' transitions at the elbows, etc...
Then to draw multiple characters in one batch, have each instance store an offset to add to each vertex's bone-index.
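A CPU-side sketch of that layout (types are stand-ins): append each instance's bones to one shared transform buffer and record the base offset per instance.

```cpp
#include <cassert>
#include <vector>

// Stand-in: one 4x3 bone matrix, stored as 3 Vec4 rows.
struct Transform { float rows[12]; };

// Append an instance's bone transforms to the shared buffer and return
// the offset that the vertex shader adds to each vertex's bone index.
int AppendInstanceBones(std::vector<Transform>& transformBuffer,
                        const std::vector<Transform>& instanceBones) {
    int offset = (int)transformBuffer.size(); // this instance's bone-index offset
    transformBuffer.insert(transformBuffer.end(),
                           instanceBones.begin(), instanceBones.end());
    return offset;
}
// In the vertex shader, the final index is then:
//   boneIndex = instance.boneOffset + vertex.boneIndex;
```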

#5194206 Constructors, Factory Methods and Destructor Questions

Posted by Hodgman on 22 November 2014 - 07:40 PM

If a class is designed to be used as a "base class" for use with concrete/implementation inheritance (AKA 'extends' inheritance) then it's common for the constructor (and/or destructor) to be protected if this base class is not functional without being extended first.

Making destructors private makes it impossible to use the delete keyword on those objects. If you then make a factory class a friend then you can force people into calling factory::release instead of deleting the object themselves.

If the class doesn't have virtual methods and inheritance, there's no need for a virtual destructor.
If a class is polymorphic, i.e. has other virtual functions and is accessed via pointers of types other than its actual type, then the rule of thumb is to always have a virtual destructor.
The actual issue that virtual destructors solve, is it allows users to use the delete keyword using a pointer to a parent/interface class. Without a virtual destructor, that parent class' destructor will be called, but the derived type's one won't.
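A minimal sketch of the problem being solved (IResource/TextureResource are hypothetical):

```cpp
#include <cassert>
#include <memory>

// Deleting a derived object through a base pointer only runs the
// derived destructor if the base's destructor is virtual.
struct IResource {
    virtual ~IResource() {}         // without 'virtual' here, ~TextureResource is skipped
    virtual int Size() const = 0;
};

struct TextureResource : public IResource {
    explicit TextureResource(int* liveCount) : liveCount(liveCount) { ++*liveCount; }
    ~TextureResource() { --*liveCount; } // cleanup that must not be skipped
    int Size() const override { return 64; }
    int* liveCount;
};
```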

Inheritance is very rare in a lot of code, so virtual destructors will be in those same areas.

Constructors that do a lot of work are a bad idea if those operations can fail.
Otherwise it's just a code style choice; people will argue either way.

#5194066 Audio API

Posted by Hodgman on 21 November 2014 - 07:44 PM

Yeah AFAIK, OpenAL Soft is the successor.

FMOD is an extremely popular choice, or Wwise if you have money ;)

#5193957 C# .Net Open Source

Posted by Hodgman on 21 November 2014 - 07:14 AM

If you follow a few simple practices (and sometimes a few complex practices) this ["no way to get away from the 'stop the world' garbage collector"] is actually an amazing feature of the languages.

The GC may be amazing, but why is barring you from having any control an amazing feature? Wouldn't it be nice if you could choose to opt in to specifying the sizes of the different heaps, hinting at good times to run the different phases, specifying runtime limits, providing your own background threads instead of automatically getting them, etc? Would it harm anything to allow devs to opt in to that stuff? Do the amazing features require the GC to disallow these kinds of hints?

With modern versions of both Java and C# ... On rare occasions  [when GC runs at the wrong time, it consumes] on the order of 1/10,000 of your frame time.

16.667ms / 10000 = 1.7 microseconds
Having seen GC's eat up anywhere from 1-8ms per frame in the past (when running on a background low-priority thread), claims of 1μs worst-case GC times sound pretty unbelievable -- the dozen cache misses involved in a fairly minimal GC cycle would alone cost that much time!
I know C# has come a long way, but claims of magic on that scale are justifiably going to be met with some skepticism.
Combine that skepticism with the huge cost involved in converting an engine over to use a GC as its core memory-management system, and you've still got a lot of resistance to accepting them.
Also, often it's impossible to do an apples to apples comparison because the semantics used by the initial allocation strategies and the final GC strategy end up being completely different, making it hard to do a valid real world head-to-head too...

while your program has some spare time on any processor (which is quite often)

Whether it's quite often or not entirely depends on the game. If you're CPU bound, then the processor might never be idle. In that case, instead of releasing your per-frame allocations every frame, they'll build up until some magical threshold out of your control is triggered, causing a frame-time hitch as the GC finally runs in that odd frame.

Also when a thread goes idle, the system knows that it's now safe to run the GC... but the system cannot possibly know how long it will be idle for. The programmer does know that information though! The programmer may know that the thread will idle for 1 microsecond at frame-schedule point A, but then for 1 millisecond at point B.
The system sees both of those checkpoints as equal "idle" events and so starts doing a GC pass at point A. The programmer sees them as having completely different impacts on the frame's critical path (and thus frame-time) and can explicitly choose which one is best, potentially decreasing their critical path.

In C++ ... collection (calling delete or free) takes place immediately ... this universally means that GC runs at the worst possible time, it runs when the system is under load.

I assume here we're just dealing with the cost in updating the allocator's management structures -- e.g. merging the allocation back into the global heap / the cost of the C free function / etc?

In most engines I've used recently, when a thread is about to idle, it first checks in with the job/task system to see if there's any useful work for it to do instead of idling. It would be fairly simple to have free push the pointer into a thread-local pending list, which kicks a job to actually free that list of pointers once some threshold is reached.
I might give it a go :D Something like this for a quick attempt, I guess.
However, the cost of freeing an allocation in a C++ engine is completely different to the (amortized) cost of freeing an allocation with a GC.
There's no standard practice for handling memory allocation in C++ -- the 'standard' might be something like shared_ptr, etc... but I've rarely seen that typical approach make its way into game engines.
The whole time I've been working on console games (PS2->PS4), we've used stack allocators and pools as the front-line allocation solutions.

Instead of having one stack (the call stack) with a lifetime of the current program scope, you make a whole bunch of them with different lifetimes. Instead of having the one scope, defined by the program counter, you make a whole bunch of custom scopes for each stack to manage the sub-lifetimes within them. You can then use RAII to tie those sub-lifetimes into the lifetimes of other objects (which might eventually lead back to a regular call-stack lifetime).
Allocating an object from a stack is equiv to incrementing a pointer -- basically free! Allocating N objects is the exact same cost.
Allocating an object from a pool is about just as free -- popping an item from the front of a linked list. Allocating N objects is (N * almost_free).
Freeing any number of objects from a stack is free, it's just overwriting the cursor pointer with an earlier value.
Freeing an object from a pool is just pushing it to the front of the linked list.
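A minimal stack-allocator sketch (alignment handling omitted) showing why those costs are what they are:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Minimal stack allocator: Alloc is one pointer bump; freeing back to a
// marker is one pointer assignment, no matter how many allocations it
// releases. Alignment handling is omitted for brevity.
class StackAllocator {
public:
    StackAllocator(void* memory, size_t size)
        : cursor((uint8_t*)memory), end((uint8_t*)memory + size) {}

    void* Alloc(size_t size) {
        if (cursor + size > end) return nullptr; // out of space
        void* p = cursor;
        cursor += size; // "basically free": one pointer increment
        return p;
    }

    uint8_t* Mark() const { return cursor; }          // open a scope
    void FreeToMark(uint8_t* mark) { cursor = mark; } // close it: frees everything since Mark()

private:
    uint8_t* cursor;
    uint8_t* end;
};
```

The Mark/FreeToMark pair is the "custom scope" described above: tie it to an object's lifetime with RAII and every allocation made inside the scope is released in a single assignment.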



Also, while we're talking about these kinds of job systems -- the thread-pool threads are very often 'going idle' but then popping work from the job queue instead of sleeping. It's pretty ridiculous to claim that these jobs are free because they're running on an otherwise 'idle' thread. Some games I've seen recently have a huge percentage of their processing workload inside these kinds of jobs. It's still vitally important to know how many ms each of these 'free' jobs is taking.

In the roughly 11 major engines I have worked with, zero of them displaced the heap processing to a low priority process.

The low priority thread is there to automatically decide a good 'idle' time for the task to run. The engines I've worked with recently usually have a fixed pool of normal priority threads, but which can pop jobs of different priorities from a central scheduler. The other option is the programmer can explicitly schedule the ideal point in the frame for this work to occur.

I find it hard to believe that most professional engines aren't doing this at least in some form...?
When managing allocations of GPU-RAM, you can't free them as soon as the CPU orphans them, because the GPU might still be reading that data due to it being a frame or more behind -- the standard solution I've seen is to push these pointers into a queue to be processed in N frames' time, when it's guaranteed that the GPU is finished with them.
At the start of each CPU-frame, it bulk releases a list of GPU-RAM allocations from N frames earlier.
Bulk-releasing GPU-RAM allocations is especially nice, because GPU-RAM heaps usually have a very compact structure (instead of keeping their book-keeping data in scattered headers before each actual allocation, like many CPU-RAM heaps do) which can potentially fit entirely into L1.
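A sketch of that N-frame deferred release queue (the FreeFn callback stands in for the real GPU-heap release):

```cpp
#include <cassert>
#include <vector>

// Resource frees are queued and executed kFrameLatency frames later,
// once the GPU can no longer be reading the memory.
template<int kFrameLatency>
class DeferredReleaseQueue {
public:
    void QueueRelease(void* ptr) { pending[frame % kFrameLatency].push_back(ptr); }

    // Call at the start of each CPU frame: bulk-releases the list that
    // was queued kFrameLatency frames ago.
    template<class FreeFn>
    void BeginFrame(FreeFn freeGpuMemory) {
        frame++;
        std::vector<void*>& old = pending[frame % kFrameLatency];
        for (void* p : old) freeGpuMemory(p);
        old.clear();
    }

private:
    std::vector<void*> pending[kFrameLatency];
    unsigned frame = 0;
};
```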
Also, when using smaller, local memory allocators instead of global malloc/free everywhere, you've got thread safety to deal with. Instead of the slow/general-purpose solution of making your allocators all thread-safe (lock-free / surrounded by a mutex / etc), you'll often use a similar strategy to the above, where you batch up 'dead' resources (potentially using wait-free queues across many threads) and then free them in bulk on the thread that owns the allocator.
e.g. a Job that's running on a SPU might output a list of Entity handles that can be released. That output buffer forms an input to another job that actually performs the updates on the allocator's internal structures to release those Entities.
One engine I used recently implemented something similar to the Actor model, allowing typical bullshit style C++ OOP code to run concurrently (and 100% deterministically) across any number of threads. This used typical reference counting (strong and weak pointers) but in a wait-free fashion for performance (instead of atomic counters, an array of counters equal in size to the thread pool size). Whenever a ref-counter was decremented, the object was pushed into a "potentially garbage" list. Later in the frame schedule where it was provable that the Actors weren't being touched, a series of jobs would run that would aggregate the ref counters and find Actors who had actually been decremented to zero references, and then push them into another queue for actual deletion.
Lastly, even if you just drop in something like tcmalloc to replace the default malloc/free, it does similar work internally, where pointers are cached in small thread-local queues, before eventually being merged back into the global heap en batch.

When enough objects are ready to move to a different generation of the GC (in Mono the generations are 'Nursery', 'Major Heap', in Java they are "Young Collection" and "Old Space Collection") the threads referencing the memory are paused, a small chunk of memory is migrated from one location to another transparently to the application, and the threads are resumed.

Isn't it nicer to just put the data in the right place to begin with?
It's fairly normal in my experience to pre-create a bunch of specialized allocators for different purposes and lifetimes. Objects that persist throughout a whole level are allocated from one source, objects in one zone of the level from another, objects existing for the life of a function from another (the call-stack), objects for the life of a frame from another, etc...
Often, we would allocate large blocks of memory that correspond to geographical regions within the game world itself, and then create a stack allocator that uses that large block for storing objects with the same lifespan as that region. If short-lived objects exist within the region, you can create a long-lived pool of those short-lived objects within the stack (within the one large block).
When the region is no longer required, that entire huge multi-MB block is returned to a pool in one single free operation, which takes a few CPU cycles (pushing a single pointer into a linked list). Even if this work occurs immediately as you say is a weakness of most C++ schemes, that's still basically free, vs the cost of tracking the thousands of objects within that region with a GC...

On extremely rare occasions (typically caused by bad/prohibited/buggy practices) it will unexpectedly run when the system is under load, exactly like C++ except not under your control.

So no - the above C++ allocation schemes don't sound exactly like a GC at all :P

#5193884 OpenGL Vs Monogame

Posted by Hodgman on 20 November 2014 - 05:39 PM

Every game is powered by a "game engine".
Drag'n'drop game-maker GUIs with visual scripting are not the only form of game engine.

If you choose to make a game without one, then you'll have built one by the time you're done.
The parts of your code that power the game, but aren't specific to the gameplay are "the engine".
Even if you make something simple like "pong" from scratch, you'll have built a "pong engine", which you can utilise to make other "pong-style" games, such as breakout.

I thought the game engine was the way for me to go, but after really looking into it I found them to be too much point and click and not enough actual coding. I really want to have full access to program user input, save states, collisions, etc on my own. While I know C# syntax I'm still trying to get used to all the useful classes and combining syntax to do certain things. I want as much hands on as possible when I make my games to give myself as much practice as possible, then in the future I can always move over to an engine of choice.

The first part isn't true - you'll still have to do a tonne of programming when using an existing engine.
If you don't use something like MonoGame, you'll just have to create your own version of it first, AND then build the game on top of your own "NotMonoGame" in exactly the same way that you would have done anyway.

As for the second part, if you're the kind of person who learns by doing, you'll probably be better off building your first games within existing, well designed, proven frameworks. Not only will you actually see results faster, but in the process of using these existing frameworks you'll be reading/using code written by expert game programmers, and gain a good understanding of how these base systems are often structured. Then later, when you try to build a game from scratch (AKA, build your own engine) you'll already be a somewhat experienced game programmer, so you'll know what your engine should look like.

IMHO, trying to build an engine before you've built games is like trying to build a race-car before you've got a driver's license... Actually: before you've even ever driven in a car at all!
Sure it can be done, but an engineer/craftsman will do better to understand the users of their craft.

Even if your goal was to become a game-engine programmer, rather than a game programmer, I'd still advocate learning to make games on many different existing game engines first, so you understand the needs of your users (i.e. Game programmers) before trying to build your own engine.

Also don't underestimate the amount of work involved in either option.
When I worked in the games industry on low-budget games:
* when we used an existing engine, we had 2 engine programmers dedicated to modifying/maintaining that engine, plus a dozen gameplay programmers.
* when we used our own engine, we had two dozen engine/tools programmers and a dozen gameplay programmers.
For a simple 8-month game project, that's somewhere around 10 to 30 man-years of work, just on programming! Also, all of those staff had 5+ years of tertiary education/experience to begin with...

Completing any game as a 1-man band is a huge achievement to look forward to.