mikeman

Better local reflections with Environment Mapping


I've recently begun playing with environment mapping using cubemaps, like the Source Engine does. Although the results are acceptable, it's known that local reflections aren't handled well this way. I believe one of the reasons is that with cubemapping we can only use the reflection vector as a direction. In real life, the reflection ray is cast from a point in space, but a cubemap lookup always treats it as starting at (0,0,0). That's why cubemapping works better when objects reflect something at a great distance (a skybox, for example), and not so well when we're dealing with a room.

I think things would improve if we "altered" the reflection vector to take the world-space position of the reflective surface into account. Let's say we build a cubemap inside a room, with the camera placed at the center (Cw, the center in world coordinates). The cubemap is also assumed to have a world-space size, which in this case is the size of the room. We pass the world-space position of the fragment (Vw) into the fragment shader and compute the offset from the room's center to the fragment (Dw = Vw - Cw). We scale Dw down by dividing it by the world-space size of the cubemap, then alter the reflection vector: Rw' = Rw + Dw. It's not perfect, and I don't have a mathematical proof, but my instinct tells me it's a pretty good approach.

Has something like this been done before? What do you think?
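The adjustment described above can be sketched in plain vector math. This is a minimal illustration, not engine code; names like `room_center` and `room_size` are made up for the example, and the heuristic is exactly the one proposed (offset the reflection vector, not an exact correction):

```python
import numpy as np

def reflect(incident, normal):
    # Standard reflection: R = I - 2*(I . N)*N, with N unit length
    return incident - 2.0 * np.dot(incident, normal) * normal

def adjusted_reflection(frag_pos, room_center, room_size, view_dir, normal):
    """Offset the cubemap lookup direction by the fragment's position
    relative to the cubemap center, as proposed above (a heuristic,
    not an exact parallax correction)."""
    r = reflect(view_dir, normal)             # ordinary reflection vector Rw
    d = (frag_pos - room_center) / room_size  # Dw, scaled by the cube's world size
    return r + d                              # altered lookup direction Rw'
```

Note that at the exact center of the room `d` is zero, so the lookup degenerates to the ordinary cubemap reflection, which is the one case where a cubemap is already correct.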

The main problem with any reflection technique in a rasterized engine is that you can't do "proper" reflections on non-flat surfaces. You can do some distortions to make it look good, but the correct solution is to generate one reflection image for each individual reflection vector - something that rasterizers are simply not suited to do. These are cases where a raytracer will produce better looking results, but of course that's not really a viable option at the moment.

I'm not really up to speed on the latest reflection methods, but it seems to me that the best shot at a good reflection would look something like this:

1) Reflect camera point and look/up vectors across the reflection plane
2) Render from this point
3) Collapse to cubemap and render onto the reflective surface

The trick is generating the reflection plane. You can probably do this by sampling the surface normal at several places on the reflective surface (likely in preprocessing) and averaging them, then weighting the result by the view direction to create the reflection plane. You should then be able to do some trickery with normal mapping, offsetting the cubemap lookup direction using the normal to generate "blobby" reflective surfaces.
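Steps 1 and 2 above (mirroring the camera across the reflection plane before rendering) are plain vector math. A minimal sketch, assuming the plane is given as a unit normal `n` and any point `p0` on it (both names illustrative):

```python
import numpy as np

def mirror_point(point, n, p0):
    # Signed distance from the point to the plane, along the unit normal n
    dist = np.dot(point - p0, n)
    # Step to the plane, then the same distance beyond it
    return point - 2.0 * dist * n

def mirror_direction(v, n):
    # Directions (look/up vectors) ignore the plane's offset from the origin
    return v - 2.0 * np.dot(v, n) * n
```

The mirrored camera position plus the mirrored look/up vectors give the view from which to render the reflection image.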

Pragma, thank you so much, this is exactly what I want. The only difference from what I described is that they do a ray-sphere collision test in the fragment shader, which is much more accurate than the hacked method I described for altering the reflection vector. I'll definitely put it in my engine.
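The ray-sphere version mentioned above can be sketched as follows: intersect the reflection ray (starting at the fragment's world position) with a sphere that stands in for the environment, then look up the cubemap along the direction from the cube's center to the hit point. All names here are illustrative, and the sketch assumes the sphere is centered on the cubemap center with the fragment inside it:

```python
import numpy as np

def parallax_corrected_dir(frag_pos, refl_dir, center, radius):
    """Intersect the ray frag_pos + t*refl_dir with a sphere of the given
    radius centered at `center`, and return the direction from the cubemap
    center to the hit point (the corrected lookup vector)."""
    d = refl_dir / np.linalg.norm(refl_dir)
    o = frag_pos - center              # ray origin relative to the sphere center
    # Solve |o + t*d|^2 = radius^2 for the positive root t
    b = np.dot(o, d)
    c = np.dot(o, o) - radius * radius
    t = -b + np.sqrt(b * b - c)        # fragment inside the sphere => c < 0, root exists
    hit = o + t * d                    # hit point, relative to the cubemap center
    return hit / radius               # unit-length lookup direction
```

For a fragment sitting exactly at the center this reduces to the plain reflection direction, matching the case where an uncorrected cubemap is already exact.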

Quote:
Original post by mikeman
[...] I believe one of the reasons is that with cubemapping we can only use the reflection vector as direction. In real life, the reflection vector is cast from a point in space, but in cubemapping it always starts at (0,0,0). [...] Has something like this been done before? What do you think?


You are correct: cubemaps assume the ray originates at the cube's center. Chris Brennan wrote an article for ShaderX that describes a method for reflecting from points other than the cube's center. That chapter is available on ATI's developer website; here's a link:

http://www.ati.com/developer/shaderx/ShaderX_CubeEnvironmentMapCorrection.pdf

--Chris

