In my experience, tight memory management can be incredibly useful in certain situations, on certain platforms, and less necessary in others.
Keeping tight control of memory is particularly useful on things like consoles and other devices with limited memory, and especially in situations where there is no swapping to disk to page in 'extra memory' when you run out. In these situations, if you run out of memory, either your application handles it gracefully (says 'cannot do this' or whatever) or you get a crash.
I see certain applications, like games, rocket control software, or plane autopilots, as more 'mission critical', so I don't want them to fail or crash under any circumstances. Whereas for e.g. a word processor, it is more acceptable if, when trying to load a document, it says 'cannot load document, not enough memory on this device' (although obviously you'd try to design to prevent this happening). But in a game it's no good if it says 'cannot load level 5, out of memory', as you cannot progress.
So for anything other than very simple games, I would tend to use a memory manager. However, for general applications / editors etc., which have to adapt to whatever documents they are editing and are allowed to 'fail' due to out-of-memory errors, I'm much more likely to use the OS allocators, directly or indirectly.
If you do preallocate blocks of memory for each of your game 'modules', you are right in saying it is useful to know in advance how much memory to allocate. Preallocating blocks for different modules can be very useful when you need to work to a memory budget, particularly with a team of programmers, rather than just putting it all together and 'hoping it doesn't run out of memory'. For some areas, this will be easy to work out (e.g. the max number of sound buffers, things like that).
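To make the per-module budget idea concrete, here is a minimal sketch of a bump allocator that hands out memory from a preallocated block and refuses to go over budget. The names (`ModuleArena`, the budget sizes) are illustrative, not from any particular engine:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal sketch of a per-module bump allocator working to a fixed budget.
class ModuleArena {
public:
    explicit ModuleArena(std::size_t budgetBytes)
        : buffer_(budgetBytes), offset_(0) {}

    // Returns nullptr instead of crashing when the module exceeds its budget,
    // so the overrun is caught during development rather than on the device.
    void* Allocate(std::size_t size,
                   std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr; // over budget
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    void Reset() { offset_ = 0; } // free everything at once, e.g. at level end
    std::size_t Used() const { return offset_; }
    std::size_t Budget() const { return buffer_.size(); }

private:
    std::vector<std::uint8_t> buffer_;
    std::size_t offset_;
};
```

The nice property for budgeting is that each programmer's module gets its own arena, and any module that blows its budget fails loudly and locally instead of starving the rest of the game.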
For others, particularly game level resources, the memory requirements may change from level to level. You may want certain levels to have more sound data than others, some more texture data, etc. However, a way around this, rather than having set limits for sound data / textures / geometry etc., is to have this data combined in a 'level file', and have a certain maximum size for your level file data.
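A build-time check for this shared budget can be as simple as summing the section sizes and comparing against the one overall cap. The section names and the 32 MB figure below are just illustrative assumptions:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Each resource type packed into the level file, with its size in bytes.
struct LevelSection {
    std::string name;
    std::size_t bytes;
};

// One shared cap for the whole level file, instead of per-type limits.
const std::size_t kMaxLevelFileBytes = 32 * 1024 * 1024; // e.g. 32 MB

// Returns true if the level's combined resources fit in the budget, so one
// level can be sound-heavy and another texture-heavy without changing limits.
bool LevelFitsBudget(const std::vector<LevelSection>& sections) {
    std::size_t total = 0;
    for (const auto& s : sections) total += s.bytes;
    return total <= kMaxLevelFileBytes;
}
```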
For tracking memory leaks, as the others say, just because you are using your own allocator doesn't automatically 'fix' leaks. However, you should design your allocator so that, along with each allocation, it can store things like the filename and line number (in some type of debug build). Then on exit, you can report any allocations left unfreed after cleanup, along with other statistics, like the maximum memory used in each module.
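A rough sketch of that debug-build tracking, assuming a hypothetical `TRACKED_MALLOC` macro that captures `__FILE__` and `__LINE__` (a real engine would more likely hook `operator new`/`delete`):

```cpp
#include <cstdlib>
#include <map>

// Metadata stored alongside every live allocation in the debug build.
struct AllocInfo {
    std::size_t size;
    const char* file;
    int line;
};

static std::map<void*, AllocInfo> g_allocations;
static std::size_t g_currentBytes = 0;
static std::size_t g_peakBytes = 0; // high-water mark for the run

void* TrackedAlloc(std::size_t size, const char* file, int line) {
    void* p = std::malloc(size);
    if (p) {
        g_allocations[p] = {size, file, line};
        g_currentBytes += size;
        if (g_currentBytes > g_peakBytes) g_peakBytes = g_currentBytes;
    }
    return p;
}

void TrackedFree(void* p) {
    auto it = g_allocations.find(p);
    if (it != g_allocations.end()) {
        g_currentBytes -= it->second.size;
        g_allocations.erase(it);
    }
    std::free(p);
}

// Capture the call site automatically at each allocation.
#define TRACKED_MALLOC(size) TrackedAlloc((size), __FILE__, __LINE__)

// On shutdown, anything still in g_allocations is a leak, and the stored
// file/line says exactly where it came from.
std::size_t LiveAllocationCount() { return g_allocations.size(); }
```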
You can also put 'check' regions of a few bytes around allocations, to detect when you have written out of bounds, e.g. off the end of arrays.
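A sketch of those check regions: a known byte pattern is written before and after each user block and verified on free. The `0xFD` pattern and 4-byte guard size are arbitrary choices for illustration (and note the returned pointer's alignment is only good enough for byte-level data in this simplified version):

```cpp
#include <cstdint>
#include <cstdlib>
#include <cstring>

static const std::uint8_t kGuardByte = 0xFD;
static const std::size_t kGuardSize = 4;

// Layout: [stored size][front guard][user block][rear guard]
void* GuardedAlloc(std::size_t size) {
    std::uint8_t* raw = static_cast<std::uint8_t*>(
        std::malloc(sizeof(std::size_t) + 2 * kGuardSize + size));
    if (!raw) return nullptr;
    std::memcpy(raw, &size, sizeof(std::size_t)); // remember the block size
    std::memset(raw + sizeof(std::size_t), kGuardByte, kGuardSize); // front
    std::memset(raw + sizeof(std::size_t) + kGuardSize + size,
                kGuardByte, kGuardSize);                            // rear
    return raw + sizeof(std::size_t) + kGuardSize;
}

// Returns true if both guard regions are intact, false if something wrote
// outside the block's bounds.
bool GuardedFree(void* user) {
    std::uint8_t* raw =
        static_cast<std::uint8_t*>(user) - kGuardSize - sizeof(std::size_t);
    std::size_t size;
    std::memcpy(&size, raw, sizeof(std::size_t));
    bool ok = true;
    for (std::size_t i = 0; i < kGuardSize; ++i) {
        if (raw[sizeof(std::size_t) + i] != kGuardByte) ok = false; // front
        if (raw[sizeof(std::size_t) + kGuardSize + size + i] != kGuardByte)
            ok = false;                                             // rear
    }
    std::free(raw);
    return ok;
}
```

In a debug build you would typically assert or log the offending allocation's file/line when `GuardedFree` detects a trashed guard.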
There are also 3rd party systems you can use for most of this leak detection and bounds checking. However, these may not be available on your target platform, so having your own can be very useful. It's the kind of thing you can write once and reuse in other projects, and great to have in your 'toolbox'.