The NVIDIA fire demo (http://http.developer.nvidia.com/GPUGems/gpugems_ch06.html) uses the following technique:
"We rendered a particle system of 'heat' particles into a texture target. During the final compositing, we simply used the (red, green) values of each 'heat render target' pixel as a per-pixel 2D (u, v) texture-coordinate displacement during the texel fetch of the 'rendered scene' texture target."
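To make the displacement step concrete, here is a minimal CPU-side sketch of that compositing fetch. It is not the demo's actual shader code, just the same idea in plain C, under some assumptions not stated in the GPU Gems text: both targets are the same size, pixels are RGBA8, and the heat target's R/G channels encode a signed offset remapped to 0..255 (128 = no displacement).

```c
/* Sketch of the "heat" compositing pass: for each output pixel, decode
 * the heat target's (red, green) into a 2D offset and fetch the scene
 * texel at the displaced coordinates. In the real demo this runs in a
 * pixel shader; names and encoding here are illustrative assumptions. */
typedef struct { unsigned char r, g, b, a; } Pixel;

static int clampi(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

void composite_heat(const Pixel *scene, const Pixel *heat,
                    Pixel *out, int width, int height, float strength)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const Pixel h = heat[y * width + x];
            /* Decode (red, green) into a signed per-pixel offset;
             * 128 means "no displacement" under our assumed encoding. */
            int dx = (int)(((h.r - 128) / 127.0f) * strength);
            int dy = (int)(((h.g - 128) / 127.0f) * strength);
            /* Displaced texel fetch from the rendered-scene target,
             * clamped at the edges (a GPU sampler would clamp/wrap). */
            int sx = clampi(x + dx, 0, width - 1);
            int sy = clampi(y + dy, 0, height - 1);
            out[y * width + x] = scene[sy * width + sx];
        }
    }
}
```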
What I'm wondering is why they render into a separate "heat render target". I already have a forward pass that renders refractive materials such as glass, and it seems to me that I could render the "heat" particle system during that same pass.