Yes it can.
Incidentally, if I have a shader technique with more than one pass, do I always have to feed the second one with the rendered results of the first? I'd have thought the underlying libraries would know that by default and somehow do it automatically. I use 2 passes with my Gaussian blur shader, horizontal and vertical, and my application code has to feed the results of the horizontal one into the vertical one. I guess the underlying system can't always know what we're trying to achieve…
This is where you implement a pluggable node-based system in which each node defines its named inputs (with system-provided “ColorBuffer” and “DepthBuffer”, and maybe “NormalBuffer” if you need it, etc.) and named outputs.
An input name that is an empty string means “use the output from the previous stage if there was a previous stage, and otherwise use ColorBuffer”.
Basically you can chain together a bunch of post-processing nodes and name their inputs and outputs.
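As a minimal sketch of that convention (all names here are illustrative, not from any particular engine), a node might look like this, with a helper that resolves the empty-string rule:

```cpp
#include <string>
#include <vector>

// Hypothetical post-processing node: named inputs and outputs only;
// a real node would also carry its shader and resource declarations.
struct PostNode {
    std::string name;
    std::vector<std::string> inputs;   // "" means "output of previous stage"
    std::vector<std::string> outputs;
    virtual ~PostNode() = default;
    virtual void Process() {}          // bind shader, set uniforms, draw
};

// Resolve the empty-string convention: an unnamed input takes the previous
// node's first output, or the system-provided ColorBuffer for the first node.
std::string ResolveInput(const std::string& input, const PostNode* prev) {
    if (!input.empty()) return input;
    if (prev && !prev->outputs.empty()) return prev->outputs.front();
    return "ColorBuffer";
}
```

With this rule, a Gaussian blur chain "just works" without application code wiring anything: the vertical pass leaves its input unnamed and automatically receives the horizontal pass's output.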
Each node also declares the resources it needs for its work: the texture dimensions (as absolute sizes or as a percentage of the screen size) and the formats it will use. These resources are named and can optionally serve as outputs.
The system pre-processes the node list and determines the minimum set of resources to allocate, based on how often a resource can be reused across the post-processing phases, etc.
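One way to see why pre-processing pays off (this is a simplified sketch, not the only way to do it): once you know the node order, every transient resource of a given size/format has a lifetime over the node indices, and the minimum number of physical buffers you need is the peak number of lifetimes alive at once:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical pre-processing step: given each transient resource's
// lifetime [firstUse, lastUse] over the ordered node list, the minimum
// number of physical buffers of one size/format is the peak number of
// lifetimes that overlap at any single node index.
int MinBuffersNeeded(const std::vector<std::pair<int, int>>& lifetimes) {
    int peak = 0;
    for (const auto& a : lifetimes) {
        int live = 0;
        for (const auto& b : lifetimes)
            if (b.first <= a.first && a.first <= b.second) ++live;
        peak = std::max(peak, live);
    }
    return peak;
}
```

For the two-pass blur example, the horizontal target is alive during nodes 0..1 and the vertical target during 1..2, so two buffers suffice no matter how long the chain gets — classic ping-ponging falls out of the analysis.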
During run-time the system walks the nodes in order, binds each node's resources to the necessary texture/framebuffer slots, and executes the node's virtual Process() function, which sets its shader, uploads whatever data it needs, and does its job.
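The run-time loop can be sketched off-GPU like this (the "textures" are stood in for by a string registry so the control flow is runnable; names and the single-input simplification are mine, a real system would bind actual texture/framebuffer objects and call the node's virtual Process()):

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Simplified node: one named input, one named output, and a callable that
// stands in for the shader pass.
struct Node {
    std::string input;   // "" means "previous stage's output"
    std::string output;
    std::function<std::string(const std::string&)> process;
};

// Walk the nodes in order, feed each one the resource it asked for, and
// store its result under its output name. Returns the final output's name.
std::string RunChain(std::vector<Node>& nodes,
                     std::map<std::string, std::string>& registry) {
    std::string last = "ColorBuffer";  // system-provided default input
    for (auto& n : nodes) {
        const std::string& src = n.input.empty() ? last : n.input;
        registry[n.output] = n.process(registry[src]);  // the Process() step
        last = n.output;
    }
    return last;
}
```

Chaining the two blur passes then needs no application-side plumbing at all: each pass just names an output, and the empty input strings do the wiring.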
The system seems complex at first, but it is actually easier than performing a heart transplant on a rampaging elephant.