
Voxel engine load times


10 replies to this topic

#1 jellyfishchris   Members   -  Reputation: 300


Posted 30 April 2013 - 06:14 AM

Hi,

I've been making a voxel engine in my spare time. Currently I have it running at a comfortable 60 fps with roughly the same number of voxels you would see in a Minecraft world (125 chunks, each chunk 64^3 blocks).

 

I am loading the voxels in as a flat array like so:

 

 

m_blocks.resize(CHUNK_SIZE);
for (int z = 0; z < CHUNK_SIZE; ++z)
{
    m_blocks[z].resize(CHUNK_SIZE);
    for (int y = 0; y < CHUNK_SIZE; ++y)
    {
        m_blocks[z][y].resize(CHUNK_SIZE);
    }
}
 

I then use the terrain engine to create the desired layout of the blocks, put that into an octree for optimization, and clear the original array. I am also using a texture atlas.

 

My load time for all those blocks (32,768,000) is about 1 minute in release mode. I was wondering if you have any ideas on how to speed up the load times. (I will eventually make the system multi-threaded when I get around to it.)

 

Thanks

 

 



#2 DrEvil   Members   -  Reputation: 1079


Posted 30 April 2013 - 06:56 AM

Have you profiled what part of the loading pipeline is taking the most time?

#3 DrEvil   Members   -  Reputation: 1079


Posted 30 April 2013 - 06:59 AM

Also, allocating vectors in that way isn't really a flat array, even if the resulting access syntax makes it look like one. It actually performs quite a lot of separate memory allocations. That doesn't mean that's necessarily where the load time is going, though.

#4 Hodgman   Moderators   -  Reputation: 27590


Posted 30 April 2013 - 07:45 AM

A completely flat array would be:

m_blocks.resize(CHUNK_SIZE*CHUNK_SIZE*CHUNK_SIZE);

 

What kind of data is a 'block'? If it's 4 bytes, then your ~32 million blocks are 125 MiB of data, which ideally you should be able to load from disk in a few seconds.
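For reference, a quick back-of-the-envelope check of those figures (a sketch only; the 4-byte block size is just the assumption from the sentence above):

#include <cstdio>

int main()
{
    // Rough footprint check of the numbers discussed in the thread.
    const long long chunks = 125;                      // as in the original post
    const long long side   = 64;                       // blocks per chunk edge
    const long long blocks = chunks * side * side * side;
    const long long bytes  = blocks * 4;               // assumed 4-byte block

    std::printf("%lld blocks, %.1f MiB\n",
                blocks, bytes / (1024.0 * 1024.0));
    // Prints: 32768000 blocks, 125.0 MiB
}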



#5 Nickie   Members   -  Reputation: 315


Posted 30 April 2013 - 08:41 AM

Well, I had the same issue.

I was loading everything at program startup. Using vectors for the chunks/blocks, it took around 30 seconds to load everything; the Visual Studio profiler told me I was stuck in the allocations.

I moved to a raw array and the result was fantastic. You could not even feel that it was allocating ~150 MB.

I was shocked then; now it seems quite normal. Allocating (searching the heap) is not that heavy an operation if you request a single block of data. Doing that search many times over is the problem.



#6 VladR   Members   -  Reputation: 722


Posted 30 April 2013 - 02:10 PM

How can we possibly guess what your code does if we don't see it?

 

Multithreading will only make things worse if the relevant process is not parallel in nature and there would be a lot of syncing.

 

 

Note that the complexity here is not just n*n (square), but cubic (n*n*n).

 

Thus every seemingly small change will have an n*n*n impact!

 

 

If you can't be bothered doing profiling, at least comment out certain functions/processes and see the impact those non-executed processes have on the total duration.


VladR    My 3rd person action RPG on GreenLight:    http://steamcommunity.com/sharedfiles/filedetails/?id=92951596

 


#7 jellyfishchris   Members   -  Reputation: 300


Posted 30 April 2013 - 05:29 PM

> A completely flat array would be:
>
> m_blocks.resize(CHUNK_SIZE*CHUNK_SIZE*CHUNK_SIZE);
>
> What kind of data is a 'block'? If it's 4 bytes, then your ~32 million blocks are 125 MiB of data, which ideally you should be able to load from disk in a few seconds.

Would it be better to do it with that approach? Also, each block is 1 byte.

> If you can't be bothered doing profiling, at least comment out certain functions/processes and see the impact those non-executed processes have on the total duration.

I have done profiling, and the thing that takes the longest time by a long shot is the memory allocation.



#8 BGB   Crossbones+   -  Reputation: 1545


Posted 30 April 2013 - 06:30 PM

> > A completely flat array would be:
> >
> > m_blocks.resize(CHUNK_SIZE*CHUNK_SIZE*CHUNK_SIZE);
> >
> > What kind of data is a 'block'? If it's 4 bytes, then your ~32 million blocks are 125 MiB of data, which ideally you should be able to load from disk in a few seconds.
>
> Would it be better to do it with that approach? Also, each block is 1 byte.
>
> > If you can't be bothered doing profiling, at least comment out certain functions/processes and see the impact those non-executed processes have on the total duration.
>
> I have done profiling, and the thing that takes the longest time by a long shot is the memory allocation.

 

 

 

Note that allocating large numbers of small objects is a bad case for typical memory managers.

 

It is generally much better to allocate a smaller number of larger objects in this case (IOW: the entire chunk as a single big array).

 

(FWIW: in my voxel engine, each block is actually 8 bytes, mostly because it has a few more per-block features.)
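For illustration, a rough sketch of the difference in allocation count per chunk (variable names are placeholders; the 1-byte block type follows the reply above):

#include <cstdint>
#include <vector>

const int CHUNK_SIZE = 64;

// Nested vectors (as in the first post): 1 + 64 + 64*64 = 4161 heap
// allocations per chunk, or roughly 520,000 allocations across 125 chunks.
// std::vector<std::vector<std::vector<std::uint8_t>>> nested;

// One big array: a single heap allocation per chunk
// (64^3 = 262,144 one-byte blocks).
std::vector<std::uint8_t> chunkBlocks(CHUNK_SIZE * CHUNK_SIZE * CHUNK_SIZE, 0);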



#9 jellyfishchris   Members   -  Reputation: 300


Posted 30 April 2013 - 06:45 PM

OK, I'll try this when I get home.

Thanks for the advice



#10 VladR   Members   -  Reputation: 722


Posted 01 May 2013 - 11:57 AM

I must have missed that at first, but this warrants a separate reply: it almost looks like you are making every single allocation (for every single voxel) manually, in a separate call? Say, if you have 3,000,000 voxels, do you actually perform all 3,000,000 allocations, one by one?


VladR    My 3rd person action RPG on GreenLight:    http://steamcommunity.com/sharedfiles/filedetails/?id=92951596

 


#11 DrEvil   Members   -  Reputation: 1079


Posted 01 May 2013 - 12:00 PM

Allocating in 1 big block is the best approach across the board. The only part that is a bit awkward is that in doing so you have to index the array differently. You can wrap the buffer in an object that hides these details though.

Cell[ x + y*width + z*width*depth ]
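A minimal sketch of such a wrapper (the Chunk3D name and the 1-byte cell type are just illustrative assumptions; it is specialised to a cubic chunk so the index math matches the formula above):

#include <cstddef>
#include <cstdint>
#include <vector>

// One flat allocation, with the 3D indexing hidden behind operator().
class Chunk3D
{
public:
    explicit Chunk3D(int size)
        : m_size(size),
          m_cells(static_cast<std::size_t>(size) * size * size, 0) {}

    std::uint8_t& operator()(int x, int y, int z)
    {
        // Same idea as Cell[x + y*width + z*width*depth], for a cube.
        return m_cells[x + y * m_size + z * m_size * m_size];
    }

private:
    int m_size;
    std::vector<std::uint8_t> m_cells;
};

// Usage:
//   Chunk3D chunk(64);
//   chunk(x, y, z) = blockId;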



