I handle rendering and post-processing per "scene". In this context, the HUD is its own scene (which allows 3D HUDs without worrying about intersections with world geometry) and the game world is another scene.
This is not the same as grouping meshes by shader, because a rendered scene outputs a flat image that can be layered on top of other renders. Each scene has its own "render path" to take, allowing different processes per scene, for example forward rendering for one scene and deferred rendering for another. The render path also has optional post-processing (PP) steps.
The way it works is: first, clear the screen; then, for each scene, render it through its render path. Each "step" in the path is a self-contained rendering class; the common interface is that a step may take meshes, cameras, or render targets as inputs, and every step outputs a render target. Step classes may be reused across scenes: PP steps usually take only render-target inputs, while 3D rendering steps take cameras and meshes. At the last step, the output is that scene's final render target, and rendering continues with the next scene. All render paths share a single pool of render-target textures.
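To make the shape of this concrete, here is a minimal sketch of the idea in Python. All class and method names (`RenderTargetPool`, `GeometryStep`, `BlurStep`, `run_path`, etc.) are hypothetical illustrations, not from my actual engine, and the "render targets" just record which steps touched them instead of holding GPU textures.

```python
class RenderTarget:
    """Stands in for a GPU texture; here it only records which steps wrote to it."""
    def __init__(self, name):
        self.name = name
        self.history = []

class RenderTargetPool:
    """Shared pool so all render paths reuse targets instead of allocating per step."""
    def __init__(self):
        self._free = []
        self._count = 0

    def acquire(self):
        if self._free:
            return self._free.pop()
        self._count += 1
        return RenderTarget(f"rt{self._count}")

    def release(self, rt):
        rt.history.clear()
        self._free.append(rt)

class RenderStep:
    """Base step: inputs vary by subclass, but every step outputs a render target."""
    def run(self, pool, **inputs):
        raise NotImplementedError

class GeometryStep(RenderStep):
    """A 3D rendering step: consumes a camera and meshes."""
    def run(self, pool, camera=None, meshes=(), **_):
        rt = pool.acquire()
        rt.history.append(f"geometry({camera},{len(meshes)} meshes)")
        return rt

class BlurStep(RenderStep):
    """A post-processing step: consumes only the previous render target."""
    def run(self, pool, source=None, **_):
        rt = pool.acquire()
        rt.history.extend(source.history)
        rt.history.append("blur")
        pool.release(source)  # hand the intermediate target back to the pool
        return rt

def run_path(steps, pool, camera, meshes):
    """Run one scene's render path, feeding each step's output to the next."""
    out = None
    for step in steps:
        out = step.run(pool, camera=camera, meshes=meshes, source=out)
    return out

# Two scenes sharing one pool: the world uses plain geometry rendering,
# the HUD adds a blur pass on top of its geometry pass.
pool = RenderTargetPool()
world = run_path([GeometryStep()], pool, "world_cam", ["terrain", "player"])
hud = run_path([GeometryStep(), BlurStep()], pool, "hud_cam", ["health_bar"])
```

Note how the blur step releases its input target back to the shared pool, so a third scene rendered afterwards would reuse that texture rather than allocate a new one.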
These renders are layered back to front, so I render the HUD last, with its own rendering technique. Because each layer has a transparent background, scenes with certain PP effects can blend in front of others. For instance, if you wanted some kind of "glow" shader for the HUD using blur and light bloom, the glowing edges would be visible on top of the render of the game world.
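The layering above is just the standard "over" compositing operator applied back to front. As a sketch (single RGBA pixels standing in for whole layers; straight alpha assumed, and the function names are hypothetical):

```python
def blend_over(dst, src):
    """Porter-Duff 'over' for straight-alpha (r, g, b, a) tuples: src on top of dst."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    out_a = sa + da * (1.0 - sa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    out_r = (sr * sa + dr * da * (1.0 - sa)) / out_a
    out_g = (sg * sa + dg * da * (1.0 - sa)) / out_a
    out_b = (sb * sa + db * da * (1.0 - sa)) / out_a
    return (out_r, out_g, out_b, out_a)

def composite(layers):
    """Composite scene layers ordered back to front onto a cleared background."""
    result = (0.0, 0.0, 0.0, 0.0)
    for layer in layers:
        result = blend_over(result, layer)
    return result

world_pixel = (0.2, 0.4, 0.8, 1.0)   # opaque game-world pixel
hud_clear = (0.0, 0.0, 0.0, 0.0)     # HUD background: fully transparent
hud_glow = (1.0, 0.5, 0.0, 0.5)      # half-transparent glow edge from the blur/bloom

unchanged = composite([world_pixel, hud_clear])  # world shows through untouched
glowing = composite([world_pixel, hud_glow])     # glow blends over the world
```

Where the HUD layer is transparent, the world pixel survives unchanged; where the glow has partial alpha, it blends over whatever the world rendered underneath, which is exactly the effect described above.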
I think it would be overkill to render many separate layers (and compositing many screen-sized quads costs fill rate), but I see it being useful for separating a handful of scenes tied to different game components, since the HUD and the game world operate under different logic and input rules.