

## Recommended Posts

Hi,

I've been making a voxel engine in my spare time. It currently runs at an easy 60 fps with roughly the same number of voxels you would see in a Minecraft world (125 chunks, each chunk 64^3 blocks).

I am loading the voxels in as a flat array like so:

```cpp
m_blocks.resize(CHUNK_SIZE);
for (int z = 0; z < CHUNK_SIZE; ++z)
{
    m_blocks[z].resize(CHUNK_SIZE);
    for (int y = 0; y < CHUNK_SIZE; ++y)
    {
        m_blocks[z][y].resize(CHUNK_SIZE);
    }
}
```


I then use the terrain engine to create the desired layout of the blocks, put the result into an octree for optimization, and clear the original array. I am also using a texture atlas.

My load time for all those blocks (32,768,000) is about 1 minute in release mode. I was wondering if you have any ideas on how to speed up the load times. (I will eventually make the system multi-threaded when I get around to it.)

Thanks

##### Share on other sites
Have you profiled what part of the loading pipeline is taking the most time?

##### Share on other sites
Also, allocating vectors that way isn't really a flat array, even if the resulting access syntax makes it look like one. It actually performs quite a lot of separate memory allocations. That doesn't necessarily mean that's where the load time is going, though.

##### Share on other sites

A completely flat array would be:

```cpp
m_blocks.resize(CHUNK_SIZE * CHUNK_SIZE * CHUNK_SIZE);
```

What kind of data is a 'block'? If it's 4 bytes, then your ~32 million blocks are 125 MiB of data, which you should ideally be able to load from disk in a few seconds.
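As a minimal sketch (illustrative names; `CHUNK_SIZE` and the block storage are assumptions following the thread), the flat layout with manual (x, y, z) indexing could look like this:

```cpp
#include <cstdint>
#include <vector>

// Illustrative chunk size; the thread uses 64^3 blocks per chunk.
constexpr int CHUNK_SIZE = 64;

// One contiguous allocation for the whole chunk instead of
// 1 + CHUNK_SIZE + CHUNK_SIZE^2 nested-vector allocations.
struct Chunk {
    std::vector<uint8_t> blocks;

    Chunk() : blocks(CHUNK_SIZE * CHUNK_SIZE * CHUNK_SIZE) {}

    // Map 3D coordinates onto the flat array.
    static int index(int x, int y, int z) {
        return (z * CHUNK_SIZE + y) * CHUNK_SIZE + x;
    }

    uint8_t& at(int x, int y, int z) { return blocks[index(x, y, z)]; }
};
```

Access stays as cheap as the nested version, but the whole chunk is one allocation and one contiguous cache-friendly buffer.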

##### Share on other sites

Well, I had the same issue.

I was loading everything at program startup. Using vectors for the chunks/blocks, it took around 30 seconds to load everything; the Visual Studio profiler told me I was stuck in the allocations.

I moved to a raw array and the result was fantastic. You couldn't even feel it allocating ~150 MB.

I was shocked then; now it seems quite normal. Allocating (searching the heap) is not that heavy an operation if you request a single block of data. Searching multiple times is the problem.
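Back-of-the-envelope arithmetic (using the thread's numbers; not a benchmark) shows why: the nested-vector layout performs thousands of heap allocations per chunk, while a single flat buffer performs one.

```cpp
// Rough allocation counts per chunk (illustrative arithmetic only).
constexpr long long CHUNK_SIZE = 64;
constexpr long long CHUNKS = 125;  // the thread's world size

// Nested vectors: 1 outer + CHUNK_SIZE middle + CHUNK_SIZE^2 inner allocations.
constexpr long long nested_allocs = 1 + CHUNK_SIZE + CHUNK_SIZE * CHUNK_SIZE;

// A flat array is a single allocation per chunk.
constexpr long long flat_allocs = 1;

// Across the whole world: ~520k heap searches vs. 125.
constexpr long long nested_total = nested_allocs * CHUNKS;
constexpr long long flat_total = flat_allocs * CHUNKS;
```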

##### Share on other sites

How can we possibly guess what your code does if we don't see it?

Multithreading will only make things worse if the relevant process is not parallel in nature and there would be a lot of syncing.

Note that the complexity here is not just n*n (quadratic), but cubic (n*n*n).

Thus every seemingly small change will have an n*n*n impact!

If you can't be bothered doing profiling, at least comment out certain functions/processes and see the impact those non-executed processes have on the total duration.

##### Share on other sites

> A completely flat array would be:
>
> `m_blocks.resize(CHUNK_SIZE * CHUNK_SIZE * CHUNK_SIZE);`
>
> What kind of data is a 'block'? If it's 4 bytes, then your ~32 million blocks are 125 MiB of data, which you should ideally be able to load from disk in a few seconds.

Would it be better to do it that way? Also, each block is 1 byte.

> If you can't be bothered doing profiling, at least comment out certain functions/processes and see the impact those non-executed processes have on the total duration.

I have done profiling, and the thing that takes the longest by a long shot is the memory allocation.

##### Share on other sites

> > A completely flat array would be:
> >
> > `m_blocks.resize(CHUNK_SIZE * CHUNK_SIZE * CHUNK_SIZE);`
> >
> > What kind of data is a 'block'? If it's 4 bytes, then your ~32 million blocks are 125 MiB of data, which you should ideally be able to load from disk in a few seconds.
>
> Would it be better to do it that way? Also, each block is 1 byte.
>
> > If you can't be bothered doing profiling, at least comment out certain functions/processes and see the impact those non-executed processes have on the total duration.
>
> I have done profiling, and the thing that takes the longest by a long shot is the memory allocation.

Note that allocating large numbers of small objects is a bad case for typical memory managers.

It is generally much better to allocate a smaller number of larger objects in this case (in other words, the entire chunk as a single big array).

(FWIW: in my voxel engine each block is actually 8 bytes, mostly because it has a few more per-block features.)
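A minimal sketch of that idea, assuming a simple bump-style pool (the `BlockPool` name and interface are illustrative, not the poster's code): allocate one big buffer per chunk up front and hand out fixed-size block records from it, so the heap is searched once instead of millions of times.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative pool: one large heap allocation, then cheap pointer-bump
// "allocations" that never touch the heap again.
struct BlockPool {
    std::vector<uint8_t> storage;
    std::size_t next = 0;

    explicit BlockPool(std::size_t bytes) : storage(bytes) {}

    // Return the next slice of `size` bytes, or nullptr when exhausted.
    uint8_t* alloc(std::size_t size) {
        if (next + size > storage.size()) return nullptr;
        uint8_t* p = storage.data() + next;
        next += size;
        return p;
    }
};
```

Freeing is all-or-nothing (drop the pool with the chunk), which fits chunk-granularity voxel data well.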

##### Share on other sites

OK, I'll try this when I get home.