Refractive water

Started by
2 comments, last by Burnt_Fyr 9 years, 10 months ago

Hey,

I want to optimize my water rendering based on this article: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter19.html

However, I don't understand what the last step should look like when rendering the final scene. Three approaches came to mind, but I don't know which one is the correct way to go. Could you give me some suggestions, please?

I

1. Render everything except the water to the S texture

2. Render the water to the same S texture, using the texture you are rendering to for the refractions (is this even possible?)

3. Render the texture on a plane orthogonal to the camera (a full-screen quad)

II

1. Render everything except the water to the S texture; the alpha channel will mark which parts of the water are visible

2. Render the texture on a plane orthogonal to the camera (a full-screen quad)

3. Render the water to the main back buffer (in front of the plane), using the mask from the alpha channel to clip what is not visible

III

1. Render everything (without the water) to the S texture and the back buffer at once using MRT

2. Render the water directly to the back buffer, using the texture for the refractions

Thanks!


First of all, you render all non-refractive meshes to a color buffer. As the article says, you need another buffer for determining whether the position you are sampling from the color buffer is in front of your refractive mesh or not.

So when rendering the non-refractive meshes, you may want to use MRT and write them into the black-and-white buffer at the same time.

Then render your refractive meshes into the black-and-white buffer; refractive meshes need to be drawn in black.

Then you would need to render to the color buffer while reading from it at the same time. That's not possible, but you can render the contents of the color buffer into another buffer (let's name it color2).

So now render your refractive meshes onto the color buffer, using color2 and the black-and-white buffer to fulfill your needs.

As for determining whether the position you are sampling from the color buffer is in front of your refractive mesh or not, you can also use a depth buffer if you have one.

I think you can also directly copy the contents of the color buffer to color2 using the map functionality in DirectX.

I think I will go with the MRT solution, because I will probably need it anyway when doing the soft-edges feature, so it will be good to study this topic a bit. Although I have a possibly silly question before I start: does MRT work with the back buffer as well? I mean, is it possible to render simultaneously to the back buffer and some other render target, or can you only render simultaneously to two other render targets, neither of which is the main back buffer?

Your hardware will dictate the number of simultaneous render targets available. You can bind your swap chain's back buffer as a render target along with others, or you may wish to bind the back buffer in a separate pass, as you would in deferred rendering.
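For what it's worth, in D3D11 the back buffer's render target view is bound like any other RTV. A minimal fragment, assuming `context`, `backBufferRTV`, `sTextureRTV`, and `depthDSV` were created elsewhere:

```cpp
// D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT is 8, but check your feature
// level / hardware for what you can actually use.
ID3D11RenderTargetView* rtvs[2] = { backBufferRTV, sTextureRTV };
context->OMSetRenderTargets(2, rtvs, depthDSV);
// The pixel shader then writes SV_Target0 (back buffer) and
// SV_Target1 (S texture).
```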

This topic is closed to new replies.
