
### Hodgman

Posted 08 September 2012 - 07:01 AM

> Would you want the nuke heading towards your neighbour's house programmed in Java with garbage collection, or C++ with no external allocations? I know which one I'd rather have heading towards my neighbours.

C++? I'd hope they'd instead use a less error-prone language like Ada.
P.S. Why are my neighbours being nuked? I'm pretty screwed if that happens.

> If you need to use more than this, then you need to support streaming of level data on the fly (this is a whole other topic with similar concerns; guess what, you can use fixed-size bank slots for this too!).

Yeah, on the last streaming platformer/adventure game I worked on, we allocated three big chunks of RAM that were assigned to physical areas of the game world. We'd always have two "chunks" of a level present, with a third being streamed in. Every level chunk therefore had the same maximum memory limit, and the level designers had to cut up the chunks (and design their line-of-sight blockers / chunk-transition areas) so that this limit was respected.
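The triple-chunk rotation described above can be sketched roughly as below. This is my own illustrative reconstruction, not the shipped code: three equally sized slots, two live at any time, with the third as the streaming target that gets swapped in on completion. All names (`ChunkSlot`, `ChunkStreamer`, etc.) are hypothetical.

```cpp
#include <cassert>
#include <cstddef>

// One fixed-size slot; every level chunk must fit the same budget.
struct ChunkSlot {
    static const size_t kChunkBudget = 8 * 1024 * 1024; // illustrative per-chunk limit
    int chunkId = -1; // which level chunk currently occupies this slot (-1 = empty)
};

class ChunkStreamer {
public:
    // Begin streaming 'chunkId' into the slot that is not one of the two live ones.
    void beginStream(int chunkId) { slots_[streaming_].chunkId = chunkId; }

    // When streaming completes, the freshly loaded slot becomes live, and the
    // oldest live slot is evicted and becomes the next streaming target.
    void finishStream() {
        int evicted = live_[0];
        live_[0]    = live_[1];
        live_[1]    = streaming_;
        streaming_  = evicted;
    }

    int liveChunk(int i) const { return slots_[live_[i]].chunkId; }

private:
    ChunkSlot slots_[3];
    int live_[2]   = {0, 1}; // indices of the two resident chunk slots
    int streaming_ = 2;      // index of the slot currently being filled
};
```

Because every slot has the same budget, eviction never fragments memory; the designers' job is to keep each authored chunk under `kChunkBudget`.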

> For things that are truly variable size (levels etc)

IMHO, the level-compiler tool should be able to determine the maximum required runtime size for a level, so when loading it you can just malloc that much memory and stream the level data into it. Ideally, the level data would also be "in-place" serialized, so there's no "parsing"/"OnLoad" processing that needs to be done to it.

To allow large complex files to be loaded as a single large allocation, I use a bunch of custom classes to reimplement the basic C concepts of the pointer, array and string.
e.g. If you had a group of widgets to load, along the lines of:
```cpp
struct Widget
{
    char*   name;
    Vec3    position;
    Widget* parent;
};
struct WidgetFile
{
    int     numWidgets;
    Widget* widgets;
};
```
then the in-place version replaces the raw pointers with relative offsets:
```cpp
struct Widget
{
    Offset<String> name;
    Vec3           position;
    Offset<Widget> parent;
};
struct WidgetFile
{
    List<Widget> widgets;
};
```
And then the data-compiler tool could spit out a file such as the one below, and I'd just be able to read the whole file in and cast it to a WidgetFile without parsing it or having to make a lot of small allocations:
```
0  0x00000002                       //WidgetFile::widgets::count
4  0x00000028                       //widgets[0].name: 40-byte offset to {5,"Frank"}
8  0x00000000                       //widgets[0].position.x
C  0x00000000                       //widgets[0].position.y
10 0x00000000                       //widgets[0].position.z
14 0x00000000                       //widgets[0].parent: NULL
18 0x0000001B                       //widgets[1].name: 27-byte offset to {3,"Bob"}
1C 0x00000000                       //widgets[1].position.x
20 0x00000000                       //widgets[1].position.y
24 0x00000000                       //widgets[1].position.z
28 0xFFFFFFDC                       //widgets[1].parent: -36-byte offset to widgets[0]
2C \5Fra                            //*widgets[0].name
30 nk\0\3                           //*widgets[1].name
34 Bob\0
```
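The key trick that makes this dump position-independent is the self-relative offset: rather than an absolute pointer, each field stores the byte distance from the field itself to its target, so the whole blob can be loaded at any address and still resolve. A minimal sketch of such an `Offset<T>` (my own guess at the idea, not Hodgman's actual class) might look like:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Self-relative offset: stores the distance in bytes from this field to the
// target object. 0 is reserved for null, matching the NULL parent in the dump.
template<class T>
struct Offset {
    int32_t rel;
    T* get() {
        if (rel == 0) return nullptr;
        return reinterpret_cast<T*>(reinterpret_cast<char*>(this) + rel);
    }
    void set(T* target) {
        rel = target ? int32_t(reinterpret_cast<char*>(target) -
                               reinterpret_cast<char*>(this))
                     : 0;
    }
};

// A tiny blob whose pointer-equivalent field refers to another of its own
// members, like widgets[1].parent referring back to widgets[0] above.
struct Blob {
    Offset<int> ptr;
    int         value;
};
```

Because `rel` depends only on the relative layout, a `memcpy` of the whole blob to a new address (i.e. loading the file anywhere) leaves every reference valid, which is why no pointer fix-up pass is needed on load.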

> I would on the whole use things like fixed-size memory allocators (and potentially other constant-time allocators) for things that need to be created/destroyed dynamically (see my first post on page 1). You can use this for constant-time, incredibly fast allocations/deallocations, suitable for things like nodes in algorithms, even particle-type systems.

Yeah, I agree. In my engine, if something needs to allocate memory, then I have to pass it an appropriate allocator; new/delete/malloc/free are banned (globals are bad).
And I don't mean that I pass around some abstract "Allocator", or even a fixed-concept allocator (like the C++ containers use) -- different systems will require different concrete allocators (which might have different interfaces and semantics). An algorithm that needs to temporarily build a large list internally might need to be passed a stack of bytes to use as scratch memory, a system that spawns monsters might need to be passed a monster pool, etc.
My bread-and-butter allocator (kind of equivalent to shared_ptr+new in general C++ code) is just called Scope (and is used with a custom 'new' keyword) -- it uses a stack allocator internally, but any 'newed' objects are bound to the lifetime of the Scope object (like the "automatic" / non-heap variables that we're used to). You don't have to delete them and can't leak them; they're destructed when the Scope object is destructed. Scope objects are usually allocated inside other scope objects, which we should all be used to. I find this a much simpler, more efficient and less error-prone way to manage heap allocations than the traditional C++ solutions. The start of my game usually looks something like:
```cpp
MallocStack    memory( eiMiB(256) );
Scope          a( memory.stack, "main" );
eiNew(a, Game)(a, "foobar");
```
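The mechanics behind a Scope like this can be sketched as follows. This is a minimal reconstruction under my own assumptions (the real eiNew macro and Scope internals are not shown in the post): objects are placement-new'd out of a linear stack allocator, each registers a destructor callback, and the Scope's destructor unwinds them in reverse order before rewinding the stack.

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// Bump-pointer allocator over a caller-provided buffer. Freeing is only done
// wholesale, by rewinding to an earlier mark. (Error handling omitted.)
class StackAlloc {
public:
    StackAlloc(void* mem, size_t size)
        : base_(static_cast<char*>(mem)), cursor_(0), size_(size) {}
    void* alloc(size_t bytes) {
        size_t aligned = (bytes + 15) & ~size_t(15); // keep every block 16-byte aligned
        assert(cursor_ + aligned <= size_);
        void* p = base_ + cursor_;
        cursor_ += aligned;
        return p;
    }
    size_t mark() const { return cursor_; }
    void rewind(size_t mark) { cursor_ = mark; }
private:
    char*  base_;
    size_t cursor_, size_;
};

// Binds object lifetimes to a lexical scope: create<T> allocates from the
// stack and records a finalizer; ~Scope runs the finalizers in reverse
// allocation order, then releases all the memory at once.
class Scope {
public:
    explicit Scope(StackAlloc& a) : alloc_(a), mark_(a.mark()), finalizers_(nullptr) {}
    ~Scope() {
        for (Finalizer* f = finalizers_; f; f = f->next)
            f->destroy(f->object);
        alloc_.rewind(mark_);
    }
    template<class T, class... Args>
    T* create(Args&&... args) {   // stand-in for the custom eiNew macro
        T* obj = new (alloc_.alloc(sizeof(T))) T(static_cast<Args&&>(args)...);
        Finalizer* f = new (alloc_.alloc(sizeof(Finalizer))) Finalizer;
        f->object  = obj;
        f->destroy = [](void* p) { static_cast<T*>(p)->~T(); };
        f->next    = finalizers_; // prepend, so destruction runs in reverse order
        finalizers_ = f;
        return obj;
    }
private:
    struct Finalizer { void* object; void (*destroy)(void*); Finalizer* next; };
    StackAlloc& alloc_;
    size_t      mark_;
    Finalizer*  finalizers_;
};
```

As in the post's example, nested Scopes simply take later marks on the same underlying stack, so a child Scope's memory is always released before (or exactly when) its parent's is, and nothing can leak past the scope that created it.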
