You mention that you can now achieve all of this with only two pipes:
- with one CRenderTexturePipe, the scene is rendered into an RGBA16f buffer for the color and into a second RGBA16f buffer for the velocity/depth. The velocity can be used for motion blur; the depth is used to determine the focus distance in the depth-of-field effect.
- then one CEffectPipe takes as input the two textures rendered by the CRenderTexturePipe, binds a post-processing depth-of-field shader, and outputs the result either to another texture or to the main color buffer (see the sketch after this list).
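Just to check that I'm reading this right, here's a minimal C++ sketch of that data flow. Every type and member name below is my own invention; only the two RGBA16f targets and the order of the passes come from your description:

```cpp
#include <memory>

enum class TexFormat { RGBA16F };

struct Texture {
    int width, height;
    TexFormat format;
};

// First pipe: renders the scene into two off-screen targets (MRT).
struct CRenderTexturePipe {
    std::shared_ptr<Texture> color;         // target 0: scene color
    std::shared_ptr<Texture> velocityDepth; // target 1: velocity.xy + depth

    CRenderTexturePipe(int w, int h)
        : color(std::make_shared<Texture>(Texture{w, h, TexFormat::RGBA16F})),
          velocityDepth(std::make_shared<Texture>(Texture{w, h, TexFormat::RGBA16F})) {}

    void render() {
        // Bind both attachments and draw the scene; the fragment shader
        // writes color to target 0 and velocity/depth to target 1.
    }
};

// Second pipe: a full-screen post-processing pass.
struct CEffectPipe {
    std::shared_ptr<Texture> inColor;
    std::shared_ptr<Texture> inVelocityDepth;
    std::shared_ptr<Texture> output; // nullptr => main color buffer

    void render() {
        // Bind the depth-of-field shader, sample inColor and
        // inVelocityDepth (the depth drives the focus distance), and
        // draw a full-screen quad into `output` or the back buffer.
    }
};

int main() {
    CRenderTexturePipe scene(1280, 720);

    CEffectPipe dof;
    dof.inColor = scene.color;
    dof.inVelocityDepth = scene.velocityDepth;
    dof.output = nullptr; // composite straight to the main color buffer

    scene.render();
    dof.render();
}
```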
I'm curious, though: what is the "glue" that plugs these two together in the many combinations needed to create each effect? If it were nothing but these two pipe objects, each with a fixed output, the result would obviously be very generic. So where do the variations come from? For example, when rendering a depth-of-field effect, who decides that CRenderTexturePipe should create two render textures and that CEffectPipe should apply the appropriate depth-of-field shader?
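To make the question concrete, here is a purely hypothetical sketch of the kind of glue I'm imagining; none of these names come from your code. Is it essentially a hand-written setup function per effect, perhaps behind a registry like this, or is the pipe graph built from some data description instead?

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

struct IPipe {
    virtual void render() = 0;
    virtual ~IPipe() = default;
};

// Stand-ins for the real pipes, reduced to what the question needs.
struct CRenderTexturePipe : IPipe {
    CRenderTexturePipe(int /*w*/, int /*h*/, int /*numTargets*/) {}
    void render() override {}
};

struct CEffectPipe : IPipe {
    explicit CEffectPipe(std::string /*shaderName*/) {}
    void setInput(int /*slot*/, IPipe* /*source*/, int /*sourceTarget*/) {}
    void render() override {}
};

using PipeChain = std::vector<std::unique_ptr<IPipe>>;

// Who writes this function, and where does it live? One per effect?
PipeChain setupDepthOfField(int w, int h) {
    PipeChain pipes;
    auto scene = std::make_unique<CRenderTexturePipe>(w, h, /*numTargets=*/2);
    auto dof = std::make_unique<CEffectPipe>("dof");
    dof->setInput(0, scene.get(), 0); // scene color
    dof->setInput(1, scene.get(), 1); // velocity/depth
    pipes.push_back(std::move(scene));
    pipes.push_back(std::move(dof));
    return pipes;
}

// ...or is there a registry mapping effect names to such factories?
std::map<std::string, std::function<PipeChain(int, int)>> effectFactories = {
    {"depthOfField", setupDepthOfField},
};

int main() {
    PipeChain chain = effectFactories["depthOfField"](1280, 720);
    for (auto& pipe : chain) pipe->render();
}
```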