Hello.
I'm trying to get into Vulkan. According to the spec and the usual best-practice guides, allocating a separate `VkDeviceMemory` for every buffer/texture is bad practice (implementations even cap the number of live allocations via `maxMemoryAllocationCount`), so I should write my own memory allocator and sub-allocate from large blocks. Model data won't be a problem, but I use texture streaming, which requires lots of allocations and deallocations very frequently, and performance is critical here. My plan is to allocate a fixed amount of memory up front (say 256-1024 MB, depending on the texture quality setting) and do my own sub-allocations from it. I've written an allocator that uses the best-fit algorithm: it finds the smallest free block the allocation fits into and places the allocation there. Freed blocks are correctly merged with adjacent free neighbors, and it all seems to work correctly and is fast enough for my uses.
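For reference, here's a stripped-down sketch of the kind of best-fit sub-allocator I mean (offsets only; the actual `VkDeviceMemory` handle and `vkBindImageMemory` calls are omitted, and the names are mine):

```cpp
#include <cstdint>
#include <iterator>
#include <map>
#include <optional>

// Best-fit sub-allocator over one large device-memory block.
// Tracks free and used ranges as offset -> size maps sorted by offset.
class BestFitAllocator {
public:
    explicit BestFitAllocator(uint64_t heapSize) {
        free_[0] = heapSize;                       // one big free block to start
    }

    // Returns the aligned offset of the allocation, or nullopt on failure.
    std::optional<uint64_t> alloc(uint64_t size, uint64_t alignment) {
        uint64_t bestOff = 0, bestWaste = UINT64_MAX;
        bool found = false;
        for (auto& [off, sz] : free_) {            // best fit: smallest leftover
            uint64_t aligned = (off + alignment - 1) & ~(alignment - 1);
            uint64_t pad = aligned - off;
            if (sz >= pad + size && sz - (pad + size) < bestWaste) {
                bestOff = off;
                bestWaste = sz - (pad + size);
                found = true;
            }
        }
        if (!found) return std::nullopt;
        uint64_t off = bestOff, sz = free_[bestOff];
        free_.erase(bestOff);
        uint64_t aligned = (off + alignment - 1) & ~(alignment - 1);
        if (aligned > off) free_[off] = aligned - off;     // leading pad stays free
        uint64_t end = aligned + size;
        if (off + sz > end) free_[end] = off + sz - end;   // trailing remainder
        used_[aligned] = size;
        return aligned;
    }

    void free(uint64_t offset) {
        uint64_t size = used_.at(offset);
        used_.erase(offset);
        auto [it, inserted] = free_.emplace(offset, size);
        // Merge with the next free block if adjacent.
        auto next = std::next(it);
        if (next != free_.end() && it->first + it->second == next->first) {
            it->second += next->second;
            free_.erase(next);
        }
        // Merge with the previous free block if adjacent.
        if (it != free_.begin()) {
            auto prev = std::prev(it);
            if (prev->first + prev->second == it->first) {
                prev->second += it->second;
                free_.erase(it);
            }
        }
    }

private:
    std::map<uint64_t, uint64_t> free_;  // offset -> size
    std::map<uint64_t, uint64_t> used_;  // offset -> size
};
```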
However, I am worried about fragmentation of the texture memory caused by the random allocations and deallocations. I've written a test program that randomly allocates and deallocates "textures" (memory sizes corresponding to power-of-two textures with full mip chains), and I quite often get allocation failures due to fragmentation. Logging the failure that occurred with the most free memory still left in the heap, I found that after millions of allocations/frees, in the worst cases 70 to 120 MB of a 1024 MB heap can be unusable due to fragmentation. That's quite a lot of waste, IMO.
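For the test sizes, I assume 4 bytes per texel and tight packing, so a full mip chain adds roughly a third on top of the base level; the helper below is how I compute them (a simplification: real images have driver-specific alignment and row padding):

```cpp
#include <cstdint>

// Total bytes for a square power-of-two texture with a full mip chain,
// assuming 4 bytes per texel and tight packing (no row padding).
uint64_t textureBytesWithMips(uint32_t baseDim) {
    uint64_t total = 0;
    for (uint32_t d = baseDim; d >= 1; d >>= 1)
        total += uint64_t(d) * d * 4;
    return total;
}
```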
> Is there any way I can improve my allocator to minimize fragmentation in the case of texture streaming?
> Is there any good way of dealing with fragmentation? Forcibly unloading some in-use textures so they can be reloaded elsewhere in memory? Defragmenting the memory in real time? For example, would it be possible to consolidate holes in the heap by copying textures around: fitting small textures into existing holes without creating new ones, or even copying a texture to a temporary "hole", then back to a better location?
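To make the last idea concrete, here's roughly how I imagine the CPU-side planning step could look: compute packed destination offsets for all live blocks, then execute the resulting moves with `vkCmdCopyBuffer`/`vkCmdCopyImage`. This sketch ignores alignment and overlapping-copy hazards, which a staging region would have to handle:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Block { uint64_t offset, size; };
struct Move  { uint64_t from, to, size; };

// Plan a full compaction: slide every live block down toward offset 0,
// in address order, and return the copies needed. Each destination is
// <= its source, but source and destination ranges can still overlap,
// so the actual GPU copies would need to go through a staging region.
std::vector<Move> planCompaction(std::vector<Block> live) {
    std::sort(live.begin(), live.end(),
              [](const Block& a, const Block& b) { return a.offset < b.offset; });
    std::vector<Move> moves;
    uint64_t cursor = 0;
    for (const Block& b : live) {
        if (b.offset != cursor)
            moves.push_back({b.offset, cursor, b.size});
        cursor += b.size;
    }
    return moves;
}
```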