I don't think things are quite so rigidly defined.
for example, I do most of my general-purpose memory allocation via a custom heap-style allocator, rather than using pool-based or region-based allocators.
typically, my loading process is also driven more by "when things come into view". like, the first time a texture is seen, a model is drawn, or a sound is played is when the loader loads it. one option is to make this partly asynchronous: things to be loaded are put in a queue, and a certain amount of time is allowed each frame for loading. this may introduce a slight delay between something coming into view and being fully loaded, but it helps reduce the effect of long, obvious loading stalls (say, if the engine tried to load everything at once).
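as a rough sketch of the queued-loading idea (all names here are made up for illustration; this version budgets by item count per call, where a real engine would more likely check elapsed time against a per-frame millisecond budget):

```c
#include <stddef.h>

#define LOAD_QUEUE_MAX 256

/* a simple ring buffer of resource names waiting to be loaded */
typedef struct {
    const char *names[LOAD_QUEUE_MAX];
    int head, tail;
} LoadQueue;

static int lq_count(LoadQueue *q) {
    return (q->tail - q->head + LOAD_QUEUE_MAX) % LOAD_QUEUE_MAX;
}

/* called when something first comes into view; returns 0 if full */
static int lq_push(LoadQueue *q, const char *name) {
    if (lq_count(q) >= LOAD_QUEUE_MAX - 1) return 0;
    q->names[q->tail] = name;
    q->tail = (q->tail + 1) % LOAD_QUEUE_MAX;
    return 1;
}

/* called once per frame: pop and load up to 'budget' items;
   returns how many were actually loaded this frame */
static int lq_service(LoadQueue *q, int budget,
                      void (*load_one)(const char *))
{
    int n = 0;
    while (lq_count(q) > 0 && n < budget) {
        if (load_one) load_one(q->names[q->head]);
        q->head = (q->head + 1) % LOAD_QUEUE_MAX;
        n++;
    }
    return n;
}
```

anything not loaded within this frame's budget simply stays queued for the next frame, which is where the slight delay comes from.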
typically, unloading works either based on distance (for world contents) or via a "stuff falls off the end of a list" strategy (for other things). for example, when a resource is accessed, it is potentially moved to the front of its respective list. if a resource hasn't been accessed recently and has drifted to the far end of the list, it may be safe to unload.
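the "falls off the end of a list" strategy is basically an LRU list; a minimal sketch of the idea (an intrusive doubly-linked list, with hypothetical names, not the engine's actual structures):

```c
#include <stddef.h>

typedef struct Res Res;
struct Res {
    Res *prev, *next;   /* intrusive links */
    int id;
};

typedef struct {
    Res *head, *tail;   /* head = most recently used */
} LruList;

static void lru_unlink(LruList *l, Res *r) {
    if (r->prev) r->prev->next = r->next; else l->head = r->next;
    if (r->next) r->next->prev = r->prev; else l->tail = r->prev;
    r->prev = r->next = NULL;
}

/* on access: move the resource to the front of the list */
static void lru_touch(LruList *l, Res *r) {
    if (l->head == r) return;                   /* already most recent */
    if (r->prev || r->next || l->tail == r)
        lru_unlink(l, r);                       /* remove if linked */
    r->next = l->head;
    if (l->head) l->head->prev = r;
    l->head = r;
    if (!l->tail) l->tail = r;
}

/* under memory pressure: the tail is the least recently used item,
   so it is the candidate to unload */
static Res *lru_evict(LruList *l) {
    Res *r = l->tail;
    if (r) lru_unlink(l, r);
    return r;
}
```

anything that keeps getting touched stays near the front; whatever reaches the tail hasn't been used in a while and can be dropped.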
some care regarding data formats may also make sense: avoiding formats which are costly to parse or which require significant up-front processing, and probably having the engine complain about particularly slow/ugly cases (for example: textures with bad resolutions or parameters, ill-advised file formats, ...).
arguably, this may not necessarily be the "ideal" set of solutions, but seems to "basically work".
granted, some data in my case does rely on bulk loading archive-like files. for example, to some extent my script-VM does this (using a WAD variant), and a few other parts of my engine use PAK files, ...
some other things (such as my voxel terrain) load "regions" basically as a single big glob of memory, then decode/encode individual chunks on demand, and periodically dump the whole region image back out to disk. internally, it works similarly to my memory manager, basically dividing the memory into small fixed-size cells and using a cell bitmap to allocate and free space (as spans of cells).
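a rough sketch of what a cell-bitmap span allocator over a region glob might look like (cell size, counts, and names are all assumed here, not the engine's actual values; first-fit linear scan for simplicity):

```c
#include <stdint.h>

#define CELL_SIZE  64      /* bytes per cell (assumed) */
#define NUM_CELLS  1024    /* cells in this region image (assumed) */

typedef struct {
    uint8_t bitmap[NUM_CELLS / 8];       /* 1 bit per cell: 1 = in use */
    uint8_t mem[NUM_CELLS * CELL_SIZE];  /* the big glob itself */
} Region;

static int cell_used(Region *rgn, int i) {
    return (rgn->bitmap[i >> 3] >> (i & 7)) & 1;
}

static void cell_mark(Region *rgn, int i, int used) {
    if (used) rgn->bitmap[i >> 3] |=  (uint8_t)(1 << (i & 7));
    else      rgn->bitmap[i >> 3] &= (uint8_t)~(1 << (i & 7));
}

/* allocate a span of cells big enough for 'size' bytes;
   returns the base cell index, or -1 if no free span was found */
static int region_alloc(Region *rgn, int size) {
    int need = (size + CELL_SIZE - 1) / CELL_SIZE;
    int i, j, run = 0;
    for (i = 0; i < NUM_CELLS; i++) {
        run = cell_used(rgn, i) ? 0 : run + 1;
        if (run >= need) {
            int base = i - need + 1;
            for (j = base; j <= i; j++) cell_mark(rgn, j, 1);
            return base;
        }
    }
    return -1;
}

/* free a span; the caller remembers the size of the allocation */
static void region_free(Region *rgn, int base, int size) {
    int need = (size + CELL_SIZE - 1) / CELL_SIZE;
    int j;
    for (j = base; j < base + need; j++) cell_mark(rgn, j, 0);
}
```

freeing a span just clears its bits, so the space is immediately reusable by later allocations, and dumping the region back to disk is a matter of writing out the glob (plus the bitmap) as-is.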
pretty much any data tied to a specific part of the world then goes into a specific region image (while this is mostly voxel chunks at present, it can also potentially include brush and mesh geometry, as well as entities, ...). I am also left realizing things I probably could have done differently (make regions perfect cubes, probably handle chunks and voxels differently, ...).
a lot may depend on what makes the most sense for the specific data and use-patterns in question as well.
(granted, I am not entirely sure what the question is asking about, FWIW...).
add, if relevant:
voxel-regions are actually more based on spatial size than memory size, and their underlying memory may be resized as-needed (generally avoided when possible).
in my case, regions are 512x512x128 meters, and chunks are 16x16x16 meters (in retrospect, I would have probably preferred 256x256x256 meter regions, but changing this would break my existing world...).
generally, voxels can't cross chunk or region boundaries by nature, so it does not matter for them.
entities or things like brushes or meshes are simply put into whichever chunk their origin happens to fall inside.
the general assumption is that things will typically be small relative to the size of the chunk or region, and typically the chunk will be loaded before whatever is inside it becomes relevant (even if it hangs out over the edge).
loading/unloading is mostly then driven by "how far away is it?". say, if a chunk exceeds 384 meters from the camera, or a region exceeds 768 meters, it is unloaded. when a chunk or region is unloaded, so too is everything it contains. likewise, if a chunk is less than 192 meters away, it will be forcefully loaded (with 256 as an "optimal view distance", providing a little slack-space between when things are more aggressively loaded or unloaded).
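the slack-space between the load and unload radii is basically hysteresis; a tiny sketch of the policy using the thresholds from above (the function and state names are made up):

```c
#define CHUNK_LOAD_DIST    192.0f   /* force-load inside this radius */
#define CHUNK_UNLOAD_DIST  384.0f   /* unload beyond this radius */

typedef enum { ST_UNLOADED, ST_LOADED } LoadState;

/* per-chunk, per-update: returns the chunk's new state given its
   distance from the camera; between 192 and 384 meters is the slack
   band where the chunk simply keeps whatever state it already has,
   so it doesn't thrash as the camera hovers near a threshold */
static LoadState chunk_update(LoadState cur, float dist)
{
    if (dist <= CHUNK_LOAD_DIST)   return ST_LOADED;
    if (dist >  CHUNK_UNLOAD_DIST) return ST_UNLOADED;
    return cur;
}
```

the same shape applies one level up for regions, just with the larger 768-meter unload radius.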