How To Achieve Real Time Self Reflection?

4 comments, last by BlazeCell 11 years, 8 months ago
I'm curious, are there techniques for achieving self reflections on a single piece of geometry in real time? For example, if I had a single mesh that was originally a tessellated plane that I have extruded pillars and spikes out of, how would I get the pillars and spikes to reflect one another?

From what I've seen the general way to achieve real time reflections is either to use a cube map to store the reflections in an omni-directional texture or to use the stencil buffer to create a portal of sorts into which the scene is rendered mirrored relative to the reflecting geometry. In both techniques, the reflecting geometry is not rendered when rendering the reflection.
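To make the stencil/portal technique concrete: re-rendering the scene "mirrored relative to the reflecting geometry" amounts to transforming everything by a reflection matrix built from the mirror plane. A minimal sketch of that matrix (function name and setup are hypothetical, not from any particular engine):

```python
import numpy as np

def reflection_matrix(n, d):
    """4x4 matrix that mirrors points across the plane n.x + d = 0
    (n must be unit length)."""
    n = np.asarray(n, dtype=float)
    m = np.eye(4)
    m[:3, :3] -= 2.0 * np.outer(n, n)  # flip the component along the normal
    m[:3, 3] = -2.0 * d * n            # account for planes not through the origin
    return m

# Mirror a point across the ground plane z = 0 (n = (0, 0, 1), d = 0)
M = reflection_matrix((0.0, 0.0, 1.0), 0.0)
p = M @ np.array([1.0, 2.0, 3.0, 1.0])  # z component is negated
```

You would prepend this matrix to the view transform when drawing the mirrored pass into the stencil-masked region (and flip the winding order, since reflection reverses handedness).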

I know that this can be achieved with ray tracing, but that is typically an offline rendering technique due to the heavy calculations involved. Are there any real time alternatives that do not use ray tracing? Or is ray tracing much cheaper to calculate than I was led to believe?

Any insight about this would be greatly appreciated.
Correct reflections are a really hard problem. I know of planar, cubemap and billboard reflections, but these will not be sufficient. Until now, games have used (more or less) fake reflections, and it is often not necessary to have physically correct reflections to immerse oneself in a game.

Ray tracing has been getting much faster over the last few years, but I'm still waiting to see a (blockbuster) game released which utilises traditional ray tracing.
Thanks for the comment Ashaman. That's what I figured. Good to at least know where current technology stands.
You can very easily do some ray tracing on the rasterized image, much like you would do parallax occlusion mapping or similar virtual displacement mapping methods. While it's not perfect (you aren't working with all the necessary information in a naive implementation), you can actually get very acceptable results in a lot of scenes and come up with a few quality hacks to fill out the edge cases :)

EDIT: More info here.
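The idea above, usually called screen-space reflections, marches the reflected ray through the depth buffer and reports a hit where the ray passes just behind the stored surface. A rough sketch of the core loop (all names and the coordinate setup are hypothetical; a real shader would work in projected coordinates with proper interpolation):

```python
import numpy as np

def ssr_trace(depth, origin, direction, steps=64, thickness=0.05):
    """March a reflected ray through a depth buffer in screen space.
    depth: 2D array of view-space depths per pixel; origin/direction are
    (x, y, z) with x, y in pixel coordinates and z in view-space depth."""
    h, w = depth.shape
    pos = np.array(origin, dtype=float)
    step = np.array(direction, dtype=float)
    for _ in range(steps):
        pos += step
        x, y = int(pos[0]), int(pos[1])
        if not (0 <= x < w and 0 <= y < h):
            return None                      # ray left the screen: no hit
        scene_z = depth[y, x]
        if scene_z <= pos[2] <= scene_z + thickness:
            return (x, y)                    # ray just passed behind a surface
    return None
```

The "edge cases" mentioned above are exactly the `None` returns here: rays that leave the screen or hit geometry occluded in the rasterized image, which is where fallback hacks (e.g. blending to a cubemap) come in.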
For example, if I had a single mesh that was originally a tessellated plane that I have extruded pillars and spikes out of, how would I get the pillars and spikes to reflect one another?
With cube-mapping, you'd carefully cut up the mesh, and render a cube-map from inside each individual pillar.
is ray tracing much cheaper to calculate than I was led to believe?

Finding ways to mix ray-tracing and rasterization together is the cutting edge at the moment.
One of the best examples of this cutting-edge work is this video of Voxel Cone Tracing. They dynamically construct a voxelized version of the polygonal scene and inject surface light values into the voxels. Then, to compute reflections, they use cone-tracing to step through the voxel structure to see where the reflected rays intersect. This is only possible on modern D3D11-class GPUs, so in a few years' time, these types of techniques might start becoming mainstream.
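The cone step itself can be approximated by sampling a mip chain of the voxel grid with a footprint that grows with distance, compositing front to back. A toy sketch of that loop (the data layout, names, and parameters are all assumptions for illustration, not the actual technique's implementation):

```python
import numpy as np

def cone_trace(mips, origin, direction, aperture=0.5, max_dist=8.0):
    """Front-to-back compositing along a cone through pre-lit voxel mips.
    mips[i]: (n, n, n, 4) array of premultiplied RGB + opacity at level i,
    where level-i voxels are 2**i world units wide."""
    color, alpha = np.zeros(3), 0.0
    dist = 1.0                               # start one voxel out to avoid self-hit
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    while dist < max_dist and alpha < 0.99:
        radius = max(1.0, dist * aperture)   # cone footprint grows with distance
        level = min(int(np.log2(radius)), len(mips) - 1)
        grid = mips[level]
        p = ((o + dist * d) / 2 ** level).astype(int)
        if np.any(p < 0) or np.any(p >= grid.shape[0]):
            break                            # cone left the voxel volume
        rgb, a = grid[p[0], p[1], p[2], :3], grid[p[0], p[1], p[2], 3]
        color += (1.0 - alpha) * rgb         # premultiplied front-to-back blend
        alpha += (1.0 - alpha) * a
        dist += radius                       # step size tracks the footprint
    return color, alpha
```

A narrow aperture gives sharp, mirror-like reflections; a wide one cheaply approximates glossy reflections, which is a big part of the technique's appeal.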

With cube-mapping, you'd carefully cut up the mesh, and render a cube-map from inside each individual pillar.

Well, then it's no longer a single piece of geometry. Never mind figuring out how to cut up something like that in real time, taking into account the perspective of the camera at a particular frame; that sounds incredibly complicated.

Finding ways to mix ray-tracing and rasterization together is the cutting edge at the moment.
One of the best examples of this cutting-edge work is this video of Voxel Cone Tracing. They dynamically construct a voxelized version of the polygonal scene and inject surface light values into the voxels. Then, to compute reflections, they use cone-tracing to step through the voxel structure to see where the reflected rays intersect.

This actually seems rather viable. I'd need to do more research into this technique, but after viewing that video I suspect you could achieve self reflection with LOD based on the granularity of the voxels. Thanks for the tip.

This topic is closed to new replies.
