Generating layered distance maps

Hi guys, this is my first time posting in the forum. I wanted to ask something about this technique that I'm trying to implement:

http://http.developer.nvidia.com/GPUGems3/gpugems3_ch17.html

The overall effect that I'm trying to achieve is to have an object reflect itself (as well as the surrounding environment). Example:

[attachment=33037:Screenshot_3.png]

Now back to the article: in section 17.2.1 they explain how to generate the layered distance maps that will later be used for raycasting.

"Computing distance maps is similar to computing classical environment maps. The difference is that not only the color but also the distance and the normal vector are calculated and stored in additional cube maps."
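
If I understand that correctly, every texel of a layer stores the shaded color, the distance from the reference point, and the surface normal. Here's a small C++ sketch of what I think one texel holds (the struct and function names are mine, not from the article):

[code]
#include <cmath>

// Minimal 3-component vector; stand-in for whatever math library is used.
struct Vec3 {
    float x, y, z;
};

// What one texel of a single distance-map layer would store,
// if I'm reading section 17.2.1 right.
struct DistanceMapTexel {
    Vec3  color;    // shaded color, as in a classical environment map
    float distance; // distance from the reference point to the surface
    Vec3  normal;   // surface normal at that point
};

// Distance of a surface point from the cube map's reference point
// (the center of the reflective object in the simplified scheme).
float referenceDistance(const Vec3& surfacePos, const Vec3& referencePoint)
{
    float dx = surfacePos.x - referencePoint.x;
    float dy = surfacePos.y - referencePoint.y;
    float dz = surfacePos.z - referencePoint.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
[/code]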

Then they tell you that to compute each layer you could use depth peeling, but they also mention a simpler alternative:

"For simpler reflective objects that are not too close to the environment, the layer generation can be further simplified. The reference point is put at the center of the reflective object, and the front and back faces of the reflective object are rendered separately into two layers."

But I still don't quite get how to do that, so here is the list of steps that (I think) I need to follow (there's a rough sketch after the list):

1) Put an environment-mapping shader on my object, to have it reflect the environment

2) Put the camera inside that object and render one cubemap with the front faces of the objects in the environment, plus additional cubemaps with the distance and normal vectors of those surfaces

3) Render another cubemap with the back faces, plus its own distance and normal cubemaps
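
To be more concrete about steps 2 and 3, this is roughly the pass structure I have in mind, separating the two layers with OpenGL face culling. bindLayerTargets, setFaceCamera and drawEnvironment are hypothetical helpers from my own code, not real API calls:

[code]
#include <GL/gl.h> // assuming a desktop OpenGL context is already set up

// Hypothetical helpers from my own engine -- not from the article or any library.
void bindLayerTargets(int layer, int face);        // attach color/distance/normal cube faces as MRTs
void setFaceCamera(int face, const float* center); // 90-degree FOV camera at the reference point
void drawEnvironment();                            // draw everything except the reflective object itself

void generateLayers(const float center[3])
{
    glEnable(GL_CULL_FACE);

    for (int layer = 0; layer < 2; ++layer) {
        // Layer 0: keep front faces (cull back); layer 1: keep back faces (cull front).
        glCullFace(layer == 0 ? GL_BACK : GL_FRONT);

        for (int face = 0; face < 6; ++face) {
            bindLayerTargets(layer, face);
            setFaceCamera(face, center);
            drawEnvironment(); // shader writes color, distance and normal to the MRTs
        }
    }
}
[/code]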

Is that correct?

Cheers
