Currently most of my techniques are done in screen space, where I do the following:
1) Render above-water reflection map to texture
2) Render below-water refraction map to texture
3) Draw the water mesh as a surface, sampling from both textures to produce effects
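For reference, step 3 in my setup is essentially a Fresnel-weighted blend of the two textures. A minimal sketch of that weighting in plain Python, standing in for the pixel shader (Schlick's approximation; the function names are just mine):

```python
def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
    """Schlick's approximation of Fresnel reflectance for an air (n1)
    to water (n2) interface, given the cosine of the angle between
    the view ray and the surface normal."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def blend(reflection, refraction, cos_theta):
    """Per-channel lerp between the reflection and refraction texels."""
    f = schlick_fresnel(cos_theta)
    return [f * rl + (1.0 - f) * rf for rl, rf in zip(reflection, refraction)]
```

Looking straight down (cos_theta near 1) this gives almost pure refraction; at grazing angles it approaches pure reflection, which matches how real water reads.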
Now I'd like to implement screen-space effects (fog, caustics) when I'm below the water surface as well, which seems doable by rendering a screen-aligned full-screen quad in front of the camera and pulling texels from my refraction map to do the post-processing.
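The fog half of that full-screen pass boils down to an exponential depth blend toward a water colour. A sketch in plain Python in place of the fragment shader (the colour and density constants are placeholders, not values I've tuned):

```python
import math

def underwater_fog(scene_rgb, depth, water_rgb=(0.05, 0.25, 0.35), density=0.15):
    """Exponential fog: the further the refracted texel is from the
    camera, the more it is pulled toward the water colour.
    `depth` is the eye-space distance reconstructed from the depth buffer."""
    f = 1.0 - math.exp(-density * depth)  # 0 at the camera, approaches 1 far away
    return tuple(s * (1.0 - f) + w * f for s, w in zip(scene_rgb, water_rgb))
```

At depth 0 the texel is untouched; far away it converges on the water colour, which hides distant geometry the way real turbidity does.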
So I've covered the cases above and below the water fairly well, but now I'm thinking about how to handle the transition. A simple boolean above/below flag won't work mid-transition, and it's also awkward to compute, because the water surface isn't flat: it moves up and down based on a vertex shader calculation (for waves). So when the camera dips just below the surface, I will see the floor of the body of water as if it were empty/dry, with no screen-space effects applied, which will look quite wrong.
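One partial mitigation I can think of is to mirror the vertex shader's wave function on the CPU and evaluate it at the camera's xz each frame, so the boolean at least tracks the waves. A sketch assuming a sum-of-sines displacement (the wave parameters here are placeholders, not my actual shader's values):

```python
import math

def wave_height(x, z, t, waves=((1.0, 0.3, 0.8, 0.6, 1.1),
                                (2.3, 0.15, -0.5, 0.9, 1.7))):
    """CPU mirror of the vertex-shader displacement. Each wave is
    (frequency, amplitude, dir_x, dir_z, speed); values are placeholders."""
    h = 0.0
    for freq, amp, dx, dz, speed in waves:
        h += amp * math.sin(freq * (dx * x + dz * z) + speed * t)
    return h

def camera_is_underwater(cam_pos, t, sea_level=0.0):
    x, y, z = cam_pos
    return y < sea_level + wave_height(x, z, t)
```

This still flips all-or-nothing for a camera sitting right at the waterline, but it would at least stop a wave crest passing over the camera from showing a "dry" pool floor.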
I've had two thoughts for how to handle this, neither of which is making me very happy, and I wanted to see if anyone else can come up with any neat ideas.
1) Add to a stencil buffer when drawing the refraction map. This will put a value in the stencil buffer everywhere that could be considered 'underwater'. Then render a screen-aligned quad, drawing only the pixels that have not already had the surface drawn on top of them. This should allow me to draw half the screen above water and half the screen below the water.
My problem with this is that I believe it forces my render-to-texture refraction map to be the same resolution as the display, which I wanted to avoid, because there's no way to stretch a stencil buffer (you can't scale a 512x512 stencil buffer up to a 1920x1200 screen). It also adds roughly a million stencil tests per frame even when the camera is totally above the water, though that could be skipped most of the time with some kind of "definitely nowhere near the water surface" flag.
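To spell out the per-pixel logic idea 1 implies, here it is simulated with plain Python lists standing in for the colour and stencil buffers (a thought experiment, not the real GPU pipeline):

```python
def composite(scene, surface, refraction, stencil):
    """Per-pixel result of the stencil idea.
    scene/surface/refraction are per-pixel colours (surface is None
    where the water mesh wasn't drawn); stencil marks pixels the
    refraction pass tagged as potentially underwater.
    - surface drawn -> keep the water-surface pixel
    - stencil set   -> the full-screen quad applies the underwater effect
    - otherwise     -> untouched above-water scene"""
    out = []
    for sc, sf, rf, st in zip(scene, surface, refraction, stencil):
        if sf is not None:
            out.append(sf)
        elif st:
            out.append(("underwater", rf))  # stand-in for the post-processed texel
        else:
            out.append(sc)
    return out
```

So the quad only ever touches pixels that are flagged underwater and not already covered by the wave surface, which is exactly the split-screen transition case.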
2) Second idea: generate some kind of simplified mesh representing the entire volume of the body of water. I could then do a plane/object intersection (where the plane is the near plane) to find out where my camera intersects the surface, and then render just this area of intersection as an object instead of a full-screen quad, possibly using stencil here as well to mask out the pixels that have already had the wave surface drawn over them.
I think this could work, but it adds a whole mess of extra geometry work that I didn't really want to deal with and was hoping to avoid. I was hoping to build this shader mainly from screen-space techniques, so it would be independent of the geometry of whatever level it's used in. This throws a big wrench into that, and now I have to be prepared for all kinds of weird geometry situations. What if my pool is a sphere? An inverse-upside-down trapezoid?
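For a feel of what the intersection test in idea 2 involves, here's a sketch that classifies the near-plane corners against a locally-flat water height and interpolates the crossing points along the edges (world-space corners, flat-water assumption, all names mine):

```python
def waterline_crossings(corners, water_y):
    """corners: the four near-plane corner points (x, y, z) in world
    space, in winding order (e.g. BL, BR, TR, TL). Returns the points
    where edges of the near-plane quad cross the water height, i.e.
    the endpoints of the on-screen waterline segment."""
    crossings = []
    n = len(corners)
    for i in range(n):
        (ax, ay, az), (bx, by, bz) = corners[i], corners[(i + 1) % n]
        if (ay < water_y) != (by < water_y):    # edge straddles the water level
            t = (water_y - ay) / (by - ay)      # linear interpolation factor
            crossings.append((ax + t * (bx - ax), water_y, az + t * (bz - az)))
    return crossings
```

No crossings means the camera is entirely above or below and the simple boolean case applies; two crossings give the segment that splits the screen into the above and below regions.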
Basically I'm just trying to figure out how to post-process only the part of the image inside the red box:
[attachment=1461:region.jpg]
Also including a little video of what I've got so far, just for some inspiration:
[media]
[/media]
YouTube link in case the embed isn't working
Thanks for reading!