TiagoCosta

Posted 13 July 2013 - 05:59 AM

Thanks for your answers.
 

for example, I do most of my general-purpose memory allocation via a custom heap-style allocator, rather than using pool-based or region-based allocators.


Using a heap (free-list) allocator, don't you have fragmentation problems? After a while of loading and unloading allocations of different sizes, the memory will most likely become fragmented; how do you deal with that?
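(To be concrete about the pattern I'm worried about: the mitigation I've read about is to keep the free list address-ordered and coalesce adjacent free blocks on every free, so mixed-size alloc/free cycles don't slowly shatter the buffer. Below is my own minimal first-fit sketch of that idea, not your actual allocator.)

#include <cstddef>

class FreeListAllocator {
    struct Block { std::size_t size; Block* next; }; // header of a free block

    Block* freeList_;

public:
    FreeListAllocator(void* buffer, std::size_t bytes)
        : freeList_(static_cast<Block*>(buffer)) {
        freeList_->size = bytes;
        freeList_->next = nullptr;
    }

    void* allocate(std::size_t bytes) {
        bytes = align(bytes + sizeof(std::size_t)); // reserve room for a size header
        for (Block** link = &freeList_; *link; link = &(*link)->next) {
            Block* b = *link;
            if (b->size < bytes) continue;
            if (b->size - bytes >= sizeof(Block)) { // split off the unused tail
                Block* rest = reinterpret_cast<Block*>(reinterpret_cast<char*>(b) + bytes);
                rest->size = b->size - bytes;
                rest->next = b->next;
                b->size = bytes;
                *link = rest;
            } else {
                *link = b->next; // block fits (almost) exactly, use it whole
            }
            return reinterpret_cast<char*>(b) + sizeof(std::size_t);
        }
        return nullptr; // no block is large enough: the buffer is full or too fragmented
    }

    void deallocate(void* p) {
        Block* b = reinterpret_cast<Block*>(static_cast<char*>(p) - sizeof(std::size_t));
        // Re-insert in address order so neighbouring free blocks can be found...
        Block** link = &freeList_;
        while (*link && *link < b) link = &(*link)->next;
        b->next = *link;
        *link = b;
        // ...and merged, which is what keeps mixed-size alloc/free patterns
        // from leaving lots of small unusable holes behind.
        mergeWithNext(b);
        if (link != &freeList_) {
            Block* prev = reinterpret_cast<Block*>(
                reinterpret_cast<char*>(link) - offsetof(Block, next));
            mergeWithNext(prev);
        }
    }

private:
    static std::size_t align(std::size_t n) { return (n + 15) & ~std::size_t(15); }

    static void mergeWithNext(Block* b) {
        char* end = reinterpret_cast<char*>(b) + b->size;
        if (b->next && end == reinterpret_cast<char*>(b->next)) {
            b->size += b->next->size; // absorb the adjacent free block
            b->next = b->next->next;
        }
    }
};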
 
 

 

1 - From what I've been reading, every resource should be divided into fixed-size chunks and stored using a pool allocator. But how should the resources be divided? I need meshes and textures to be stored contiguously so I can create GPU resources. The solution I found is to load the whole resource using a temporary allocator, create the GPU resource, store the resource info in chunks, and clear the temporary allocator. But what if the resource info doesn't fit in a single chunk?

I disagree. Every resource whose RAM cost is too high should be divided, and that's a big difference.
Strangely enough, I did have streaming support in the past. I don't have it now. Why? Because right now 2 GiB is becoming commonplace... on video cards. I once estimated I could load my whole game into RAM - all the levels - and it would still fit. So there was no chance to really tune the streaming methods. Latency on real-world data is a different thing.

 


I would rather have a more flexible (streaming) solution than have to limit level sizes.
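To make the chunked-loading flow from my quoted question concrete, here is roughly what I had in mind; ScratchAllocator, readFileInto() and createGpuTexture() are placeholder names for illustration, not a real engine API:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Linear "scratch" allocator: allocating is a pointer bump, and everything is
// freed at once with a single reset. Fixed-size and temporary by design.
class ScratchAllocator {
    std::vector<std::uint8_t> buffer_;
    std::size_t offset_ = 0;
public:
    explicit ScratchAllocator(std::size_t bytes) : buffer_(bytes) {}
    void* allocate(std::size_t bytes) {
        if (offset_ + bytes > buffer_.size()) return nullptr;
        void* p = buffer_.data() + offset_;
        offset_ += bytes;
        return p;
    }
    void reset() { offset_ = 0; } // discard all temporary data at once
};

struct GpuTextureHandle { std::uint32_t id; };

// Stubbed file I/O and graphics calls so the sketch is self-contained;
// a real engine would call into its file system and rendering backend here.
std::size_t readFileInto(const char* /*path*/, void* dst, std::size_t capacity) {
    std::memset(dst, 0, capacity); // stand-in for reading the file from disk
    return capacity;
}
GpuTextureHandle createGpuTexture(const void* /*pixels*/, std::size_t /*bytes*/) {
    return GpuTextureHandle{1};
}

struct TextureInfo {            // the small bookkeeping that stays resident
    std::uint64_t nameHash;
    GpuTextureHandle handle;
};

TextureInfo loadTexture(const char* path, std::uint64_t nameHash, ScratchAllocator& scratch) {
    const std::size_t maxBytes = 16u * 1024u * 1024u;
    void* staging = scratch.allocate(maxBytes);                 // contiguous staging copy
    // (a real loader would handle a failed allocation and the actual file size here)
    std::size_t size = readFileInto(path, staging, maxBytes);
    GpuTextureHandle handle = createGpuTexture(staging, size);  // data now lives on the GPU
    scratch.reset();                                            // staging copy is no longer needed
    return TextureInfo{ nameHash, handle };
}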
 

I am surprised someone just waited for textures "to become visible" to load them - that would have been unacceptable for me even with async loading.


I agree. My plan is to be able to define areas in the world editor where streaming of other zones should start.
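Something like the following is what I have in mind for those editor-defined areas; StreamingTrigger and requestAsyncZoneLoad() are placeholder names, just to illustrate the idea:

#include <cstdint>
#include <vector>

struct Aabb { float min[3]; float max[3]; };

struct StreamingTrigger {
    Aabb bounds;              // area authored in the world editor
    std::uint64_t zoneId;     // zone whose streaming should start on entry
    bool requested = false;
};

bool contains(const Aabb& box, const float p[3]) {
    for (int i = 0; i < 3; ++i)
        if (p[i] < box.min[i] || p[i] > box.max[i]) return false;
    return true;
}

// Placeholder for the engine's async streaming entry point.
void requestAsyncZoneLoad(std::uint64_t /*zoneId*/) { /* enqueue on the streaming thread */ }

// Called once per frame: entering a trigger volume kicks off background
// loading of the neighbouring zone well before it becomes visible.
void updateStreaming(std::vector<StreamingTrigger>& triggers, const float playerPos[3]) {
    for (StreamingTrigger& t : triggers) {
        if (!t.requested && contains(t.bounds, playerPos)) {
            requestAsyncZoneLoad(t.zoneId);
            t.requested = true;
        }
    }
}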

 

 

For games with large (open-world) levels that use streaming, is it "ok" to limit the number of loaded resources to a fixed number? Bitsquid, for example, limits the number of units to 65k. Is it usual to put limits like that in an engine? How should the limit be calculated, or are there more dynamic solutions?

 

Having a maximum number of resources/world objects allows me to use linear arrays, increasing simplicity and most likely performance.

 

EDIT: After thinking a bit about it, limiting the number of resources shouldn't cause any issues, since I would need less than 16 MiB to store the info for 1 million resources (hashed name, reference count, and pointer to resource data), and it is highly unlikely that I'll ever need that many resources in memory simultaneously.
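For reference, the arithmetic behind that estimate; the field sizes are my assumption (a 64-bit name hash would roughly double the table):

#include <cstddef>
#include <cstdint>

struct ResourceEntry {
    std::uint32_t nameHash;   // hashed resource name
    std::uint32_t refCount;   // reference count
    void*         data;       // pointer to the resource data
};                            // 4 + 4 + 8 = 16 bytes, no padding on a 64-bit target

static_assert(sizeof(ResourceEntry) == 16, "expected 16-byte entries on a 64-bit target");

constexpr std::size_t kMaxResources = 1000000;
// 1,000,000 entries * 16 bytes = 16,000,000 bytes, roughly 15.3 MiB in total,
// which is why a fixed-capacity linear array of entries is cheap to keep resident.
static ResourceEntry g_resources[kMaxResources];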

