I disagree. Every resource whose RAM cost is too high should be divided, and that's a big difference.
1 - From what I've been reading, every resource should be divided into fixed-size chunks and stored using a pool allocator. But how should the resources be divided? I need meshes and textures to be stored contiguously so I can create GPU resources. The solution I found is to load the whole resource using a temporary allocator, create the GPU resource, store the resource info in chunks, and clear the temporary allocator. But what if the resource info doesn't fit in a single chunk?
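A minimal sketch of that load-then-discard flow, with hypothetical names (TempAllocator, ChunkPool, upload_to_gpu stand in for whatever the engine actually provides):

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Linear "temporary" allocator: everything it hands out is released
    // in a single clear() call, matching the load-then-discard pattern.
    struct TempAllocator {
        std::vector<uint8_t> buf;
        size_t used = 0;
        explicit TempAllocator(size_t cap) : buf(cap) {}
        void* alloc(size_t n) {
            if (used + n > buf.size()) return nullptr;  // out of scratch space
            void* p = buf.data() + used; used += n; return p;
        }
        void clear() { used = 0; }
    };

    // Hypothetical fixed-size chunk pool; a real one would recycle a free list.
    struct Chunk { uint8_t data[1024]; };
    struct ChunkPool {
        std::vector<Chunk*> chunks;
        Chunk* alloc_chunk() { chunks.push_back(new Chunk{}); return chunks.back(); }
    };

    // Stub standing in for the real glBufferData/CreateBuffer style upload.
    uint32_t upload_to_gpu(const void*, size_t n) { return (uint32_t)n; }

    struct MeshInfo { uint32_t gpuHandle; uint32_t byteSize; };

    MeshInfo* load_mesh(const uint8_t* fileBytes, size_t n,
                        TempAllocator& tmp, ChunkPool& pool) {
        void* staging = tmp.alloc(n);                // contiguous, transient copy
        if (!staging) return nullptr;
        std::memcpy(staging, fileBytes, n);
        uint32_t handle = upload_to_gpu(staging, n); // GPU copy made from it
        MeshInfo* info = (MeshInfo*)pool.alloc_chunk()->data;
        *info = MeshInfo{handle, (uint32_t)n};       // only small info persists
        tmp.clear();                                 // staging memory gone
        return info;
    }

If the info really doesn't fit one chunk, the usual options are chaining chunks with a next pointer or reserving a small run of contiguous chunks, but per-resource metadata typically fits a single chunk with room to spare.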
Strangely enough, I did have streaming support in the past. I don't have it now. Why? Because right now 2GiB is becoming commonplace... on video cards. I once estimated I could load my whole game into RAM - all the levels - and it would still fit, so there was no chance to really tune the streaming methods. Latency on real-world data is a different thing. I am surprised someone just waited for textures "to become visible" to load them - that would have been unacceptable for me even with async loading.
granted, the demand loading was mostly a matter of being easier to implement (and it works acceptably), even if it can't fully eliminate delays. I tried multi-threaded loading at one point, but later found that some drivers (ATI/AMD) seem to break pretty hard when uploading from multiple threads (it was working ok with nVidia HW though), so I went back to this being single-threaded (along with some other parts of my engine proving to not really be sufficiently thread-safe in general).
one partial strategy used (for a fairly long time) is that many textures are split up into 2 resolution levels, with a lower resolution "base image", and higher resolution "alternate images". typically, the base image is loaded immediately, with the alternate images being delayed until there is time to load them (say, at first it loads a 128x128 version, followed later by a 512x512 or 1024x1024 version, and any normal/specular/luminance/... maps).
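a rough sketch of how that two-level scheme can look (names here are hypothetical, and the loader is a stub for decode + glTexImage2D style upload):

    #include <queue>
    #include <string>

    struct Texture2L {
        std::string name;
        unsigned baseTex = 0;   // e.g. 128x128, loaded up front
        unsigned fullTex = 0;   // e.g. 1024x1024, loaded when there is time
        bool     queued  = false;
    };

    static std::queue<Texture2L*> upgradeQueue;

    // stub standing in for the actual file decode and GL upload
    static unsigned load_texture_file(const std::string&) {
        static unsigned next = 1; return next++;
    }

    unsigned bind_texture(Texture2L& t) {
        if (!t.baseTex)
            t.baseTex = load_texture_file(t.name + "_base");  // immediate
        if (!t.fullTex && !t.queued) {
            upgradeQueue.push(&t);     // deferred until there is time
            t.queued = true;
        }
        return t.fullTex ? t.fullTex : t.baseTex;
    }

    // called from the main loop when the frame has spare time
    void service_upgrades(int budget) {
        while (budget-- > 0 && !upgradeQueue.empty()) {
            Texture2L* t = upgradeQueue.front(); upgradeQueue.pop();
            t->fullTex = load_texture_file(t->name + "_full");
            t->queued  = false;
        }
    }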
also, things may be pulled into memory and decoded at different times; in some cases data is pulled into RAM earlier, but not decoded and handed over to OpenGL or similar until it is referenced.
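in sketch form (again hypothetical names, with the codec stubbed):

    #include <cstdint>
    #include <vector>

    struct LazyTexture {
        std::vector<uint8_t> rawBytes;  // pulled from disk ahead of time
        unsigned glTex = 0;             // GL object created on first use
    };

    // stub standing in for decode + upload to GL
    static unsigned decode_and_upload(const std::vector<uint8_t>&) {
        static unsigned next = 1; return next++;
    }

    unsigned ref_texture(LazyTexture& t) {
        if (!t.glTex)                   // first reference: decode now
            t.glTex = decode_and_upload(t.rawBytes);
        return t.glTex;
    }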
some of this is also related to my development of specialized codecs (though these are mostly focused on things like video and similar).
video frames are typically only decoded when geometry using the texture is directly visible. this currently leads to an issue for I vs P frames, where the tradeoff is either wasting cycles decoding non-visible I-frames, so that P-frames can be decoded immediately if the video-texture comes back into view, or delaying decoding until an I-frame has been seen.
as-is, it doesn't bother, potentially leading to a fraction of a second of garbled video when a texture first comes back into view.
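the two policies in sketch form (simplified model where having decoded the most recent I-frame is treated as enough to decode the P-frames after it; real codecs may need the full frame chain):

    enum FrameType { I_FRAME, P_FRAME };

    struct VideoState { bool haveKey = false; };

    // stub standing in for the actual codec
    static void decode_frame(VideoState&, FrameType) {}

    void step_video(VideoState& v, FrameType ft, bool visible,
                    bool alwaysDecodeKeyframes) {
        if (ft == I_FRAME) {
            if (visible || alwaysDecodeKeyframes) {
                decode_frame(v, ft);   // pay cycles even while off-screen...
                v.haveKey = true;      // ...so playback can resume instantly
            } else {
                v.haveKey = false;     // skipped: following P-frames unusable
            }
            return;
        }
        // P-frame: only useful when visible and the keyframe was decoded;
        // otherwise output stays garbled until the next I-frame arrives
        if (visible && v.haveKey)
            decode_frame(v, ft);
    }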
Define what those things are. Streaming textures might make sense, but in my system the draw calls and binding commands themselves take very little. I would be very surprised if they exceeded 512KiB. Don't go looking for problems at all costs; there might be no problem at all. But if you want to go ahead, run the numbers and focus where it counts.
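To give a made-up but representative back-of-envelope: even 10,000 bind/draw commands at 48 bytes apiece is 480,000 bytes, still under 512KiB. The exact figures will differ per engine, but that is the kind of estimate worth doing before redesigning anything.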
2 - Resources like materials are simply command groups (containing commands to bind textures to shader slots, etc). Ideally they should be stored contiguously in memory, but they might not fit in a single chunk, so how should I handle them?
I am not entirely sure what is meant by this.
in my case, materials are structures, most rendering is array-driven, and beyond this it is mostly code directly making API calls.
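something like this (hypothetical names; the bind/draw functions are stubs for the raw GL calls), which sidesteps variable-size command groups entirely:

    struct Material {              // plain data, fits easily in a chunk
        unsigned shaderProg;
        unsigned diffuseTex, normalTex;
    };

    struct DrawItem {
        const Material* mat;
        unsigned vbo;
        int first, count;
    };

    // stubs wrapping glUseProgram/glBindTexture/glDrawArrays and similar
    static void bind_material(const Material&) {}
    static void draw_arrays(unsigned, int, int) {}

    void render_all(const DrawItem* items, int n) {
        for (int i = 0; i < n; i++) {
            bind_material(*items[i].mat);   // direct API calls, no stored
            draw_arrays(items[i].vbo,       // command buffers to replay
                        items[i].first, items[i].count);
        }
    }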
Generic pools are nonsense in my opinion. Consider streaming sounds, for example: they are guaranteed to be consumed at an almost perfectly regular rate. In general, it is better to have small incremental releases than one big hiccup when the pool runs out, even if the combined work ends up being more expensive.
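As an illustration of why that access pattern rewards a dedicated structure (a minimal sketch with made-up sizes): a small ring of decode blocks releases each slot at a perfectly predictable moment, so the cost is spread evenly instead of piling up.

    #include <cstdint>

    constexpr int kBlocks = 8;
    constexpr int kBlockSamples = 4096;

    // One ring per streaming sound: the mixer consumes blocks at a fixed
    // rate, so each block_mixed() call is a tiny, evenly spaced release.
    struct StreamRing {
        int16_t blocks[kBlocks][kBlockSamples];
        int head = 0, tail = 0;       // decode at head, mix from tail
        bool full() const { return (head + 1) % kBlocks == tail; }
        int16_t* decode_slot() {      // caller checks full() first
            int16_t* p = blocks[head];
            head = (head + 1) % kBlocks;
            return p;
        }
        void block_mixed() { tail = (tail + 1) % kBlocks; }
    };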
3 - Is there any way to know how many chunks the pool must be able to hold? Or should I simply wait until a chunk is freed when the pool is full?
FWIW, it is possible to get reasonably fast allocation/freeing without needing to force everything into using fixed-size chunks.
one strategy is mostly using free-lists for various sizes of fixed-size items: when an item of a given size is allocated, we first check the appropriate free-list, and when one is freed, we return it to the list for that specific size.
on allocation, if the given list is empty, then we may resort to different strategies, such as carving up an item from a bigger free-list, or falling back to a more generic allocation strategy (a personal preference here being, as-noted, bitmaps and cells).
one example breakdown of such an allocator could be (based on object size):
0-4095 bytes: objects are allocated as 1-256 16-byte cells;
4096-65535 bytes: objects are allocated as 16-256 256-byte cells (16 x 256 = 4096, 256 x 256 = 65536);
much past this point, it makes more sense to fall back to a different allocation strategy (such as malloc and/or other OS facilities).
so, after we know the size range for an object to allocate, we may scan the bitmaps for a free span of the needed size, generally with a rover to help speed up the search.
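a very cut-down sketch of the 16-byte-cell tier (the 256-byte tier repeats the same structure; real versions also need locking, a way to hand list items back to the bitmap, a large-object fallback, etc.):

    #include <cstddef>
    #include <cstdint>

    constexpr int kCellSize = 16;
    constexpr int kNumCells = 1 << 16;       // 1MiB heap, just for the sketch
    static uint8_t heap[kNumCells * kCellSize];
    static uint8_t usedMap[kNumCells / 8];   // 1 bit per 16-byte cell
    static int rover = 0;                    // where the last search ended
    static void* freeLists[256];             // one list per cell count

    static bool cell_used(int i) { return usedMap[i >> 3] & (1 << (i & 7)); }
    static void set_cells(int i, int n, bool used) {
        for (int j = i; j < i + n; j++) {
            if (used) usedMap[j >> 3] |=  (uint8_t)(1 << (j & 7));
            else      usedMap[j >> 3] &= (uint8_t)~(1 << (j & 7));
        }
    }

    // bitmap path: scan for a run of n free cells, starting at the rover
    static void* bitmap_alloc(int n) {
        for (int pass = 0; pass < 2; pass++, rover = 0) {
            for (int i = rover; i + n <= kNumCells; ) {
                int run = 0;
                while (run < n && !cell_used(i + run)) run++;
                if (run == n) {
                    set_cells(i, n, true);
                    rover = i + n;
                    return heap + (size_t)i * kCellSize;
                }
                i += run + 1;   // skip past the used cell that ended the run
            }
        }
        return nullptr;         // sketch: heap exhausted
    }

    // fast path: per-size free list first, bitmap scan as the fallback
    void* cell_alloc(size_t sz) {
        int n = (int)((sz + kCellSize - 1) / kCellSize);
        if (n < 1 || n > 256) return nullptr;  // bigger: fall back to malloc
        if (freeLists[n - 1]) {                // reuse an item of this size
            void* p = freeLists[n - 1];
            freeLists[n - 1] = *(void**)p;
            return p;
        }
        return bitmap_alloc(n);
    }

    // freeing just pushes onto the size's list; the cells stay marked used
    // until some later sweep hands list items back to the bitmap (omitted)
    void cell_free(void* p, size_t sz) {
        int n = (int)((sz + kCellSize - 1) / kCellSize);
        *(void**)p = freeLists[n - 1];
        freeLists[n - 1] = p;
    }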
Divide as much as you need to exploit coherent, predictable behavior. Most out-of-core methods I've seen don't treat all data the same; they exploit locality of some sort in a data-specific way.
4 - Should I separate resource chunks and level object chunks in two pools?
the main ones, I think, are exploiting either temporal access patterns (more generally) or spatial ones (such as for world data).
for example, for most world data, it is only really visible within a certain viewing radius.
it is fairly effective IME to simply drive the whole process by distances.
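roughly like this (hypothetical names; chunk_load/chunk_evict are stubs for unpacking into RAM and freeing it). using a larger eviction radius than load radius avoids thrashing right at the boundary:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dist(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx*dx + dy*dy + dz*dz);
    }

    struct WorldChunk { Vec3 center; bool resident; };

    // stubs standing in for decode/unpack into RAM and for freeing it
    static void chunk_load(WorldChunk&)  {}
    static void chunk_evict(WorldChunk&) {}

    void update_residency(WorldChunk* chunks, int n, Vec3 eye,
                          float loadRadius, float evictRadius) {
        for (int i = 0; i < n; i++) {
            float d = dist(chunks[i].center, eye);
            if (!chunks[i].resident && d < loadRadius) {
                chunk_load(chunks[i]);
                chunks[i].resident = true;
            } else if (chunks[i].resident && d > evictRadius) {
                chunk_evict(chunks[i]);     // evictRadius > loadRadius
                chunks[i].resident = false; // gives hysteresis
            }
        }
    }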
some of this is because unpacked voxel data can use fairly excessive amounts of RAM, and even RLE-packed data still eats up a big chunk, and puts a bit of a strain on my memory-allocator performance as well (currently mostly for allocating/freeing buffers for visible chunks).
previously, audio data was showing up on memory dumps as using a lot of space (nowhere near voxel data though), so I ended up partly moving a lot of it to a customized audio codec (where it is generally kept compressed in RAM and decoded piecewise). currently, it isn't used nearly as much for on-disk storage (though it is used in conjunction with PAK files in a few cases, with a tool converting audio to the format and spitting out the PAK).
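the decode-piecewise part, in sketch form (hypothetical codec layout, decoder stubbed): only a small window of PCM exists at any one time, while the full compressed stream stays in RAM.

    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct PackedAudio {
        std::vector<uint8_t> packed;  // full compressed stream stays in RAM
        size_t blockBytes;            // fixed-size compressed block
        int    blockSamples;          // samples each block decodes to
    };

    // stub standing in for the real block decoder
    static void decode_block(const uint8_t*, int16_t* dst, int nSamples) {
        std::memset(dst, 0, nSamples * sizeof(int16_t));
    }

    // decode just the block covering `sample`, not the whole stream;
    // out must hold blockSamples samples
    void fetch_block(const PackedAudio& a, size_t sample, int16_t* out) {
        size_t block = sample / (size_t)a.blockSamples;
        decode_block(&a.packed[block * a.blockBytes], out, a.blockSamples);
    }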
I might eventually do something similar for textures (IOW: converting all the textures into a custom codec and putting them into a big PAK file or similar). probably the engine would read in the texture PAK files in advance (similar to the sound PAKs).