
How To Achieve Real Time Self Reflection?


5 replies to this topic

#1 BlazeCell   Members   -  Reputation: 451


Posted 06 August 2012 - 12:50 PM

I'm curious, are there techniques for achieving self reflections on a single piece of geometry in real time? For example, if I had a single mesh that was originally a tessellated plane that I have extruded pillars and spikes out of, how would I get the pillars and spikes to reflect one another?

From what I've seen, the general way to achieve real time reflections is either to use a cube map to store the reflections in an omni-directional texture, or to use the stencil buffer to create a portal of sorts into which the scene is rendered mirrored relative to the reflecting geometry. In both techniques, the reflecting geometry is not rendered when rendering the reflection.
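For reference, the per-pixel work in the cube-map approach boils down to reflecting the view direction about the surface normal and using the result to index the cube map. A minimal sketch of that reflection formula in plain Python (function name and vectors are illustrative, not from any particular engine):

```python
def reflect(incident, normal):
    # R = I - 2*(N.I)*N -- the standard reflection formula; the resulting
    # direction R is what a shader would use to sample the cube map
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

# A view ray travelling "down and forward" hitting a surface whose normal
# points up (+Y) reflects "up and forward":
print(reflect((0.0, -1.0, 1.0), (0.0, 1.0, 0.0)))  # (0.0, 1.0, 1.0)
```

Because the cube map is rendered without the reflecting object itself, this lookup can never return the object's own surface, which is exactly the self-reflection limitation described above.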

I know that this can be achieved with ray tracing, but that is typically an offline rendering technique due to the heavy calculations involved. Are there any real time alternatives that do not use ray tracing? Or is ray tracing much cheaper to calculate than I was led to believe?

Any insight about this would be greatly appreciated.


#2 Ashaman73   Crossbones+   -  Reputation: 6871


Posted 06 August 2012 - 11:46 PM

Correct reflections are a really hard problem. I know of planar, cubemap and billboard reflections, but these will not be sufficient. To date, games have used (more or less) fake reflections, and it is often not necessary to have physically correct reflections to immerse oneself in a game.

Ray tracing has been getting much faster over the last few years, but I'm still waiting to see a (blockbuster) game released that utilises traditional ray tracing.

Edited by Ashaman73, 06 August 2012 - 11:48 PM.


#3 BlazeCell   Members   -  Reputation: 451


Posted 07 August 2012 - 02:35 PM

Thanks for the comment, Ashaman. That's what I figured. Good to at least know where current technology stands.

#4 InvalidPointer   Members   -  Reputation: 1398


Posted 08 August 2012 - 04:59 AM

You can very easily do some ray tracing on the rasterized image, much like you would with parallax occlusion mapping or similar virtual displacement mapping methods. While it's not perfect (you aren't working with all the necessary information in a naive implementation), you can get very acceptable results in a lot of scenes, and you can come up with a few quality hacks to fill out the edge cases.
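The idea is to march the reflected ray across the already-rendered depth buffer until it passes behind a surface. A toy sketch of that march, reduced to a 1-D "depth buffer" (the function, buffer layout, and step counts here are illustrative assumptions, not a production screen-space reflection implementation):

```python
def screen_space_march(height, x, y, dx, dy, steps=64):
    """Toy screen-space ray march: step a ray across a 1-D 'depth buffer'
    (height[i] = surface height at column i) until it dips below the surface.
    Returns the hit column, or None if the ray leaves the buffer unhit."""
    for _ in range(steps):
        x += dx
        y += dy
        xi = int(x)
        if xi < 0 or xi >= len(height):
            return None   # ray left the screen: the classic missing-
                          # information failure case of screen-space methods
        if y <= height[xi]:
            return xi     # ray passed below the surface: reflection hit
    return None

heights = [0, 0, 0, 3, 3, 0, 0]             # a 'pillar' at columns 3-4
print(screen_space_march(heights, 0.0, 1.0, 0.5, 0.25))  # hits column 3
```

Note that a ray can only hit geometry that is visible in the rasterized image, which is why edge cases (reflections of off-screen or occluded surfaces) need the quality hacks mentioned above.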

EDIT: More info here.

Edited by InvalidPointer, 08 August 2012 - 05:01 AM.


#5 Hodgman   Moderators   -  Reputation: 28438


Posted 08 August 2012 - 05:14 AM

For example, if I had a single mesh that was originally a tessellated plane that I have extruded pillars and spikes out of, how would I get the pillars and spikes to reflect one another?

With cube-mapping, you'd carefully cut up the mesh, and render a cube-map from inside each individual pillar.

is ray tracing much cheaper to calculate than I was led to believe?

Finding ways to mix ray-tracing and rasterization together is the cutting edge at the moment.
One of the best examples of this cutting-edge work is this video of Voxel Cone Tracing. They dynamically construct a voxelized version of the polygonal scene and inject surface light values into the voxels. Then, to compute reflections, they use cone tracing to step through the voxel structure to see where the reflected rays intersect. This is only possible on modern D3D11-class GPUs, so in a few years' time these types of techniques might start becoming mainstream.
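The "cone" part means the march samples an ever-coarser level of the voxel hierarchy as the footprint widens with distance, compositing opacity front to back. A toy 1-D sketch of that stepping pattern (the array, aperture, and growth rule are illustrative assumptions, not the paper's actual algorithm):

```python
def cone_trace(density, start, direction, aperture=0.5, max_dist=16.0):
    """Toy 1-D cone trace: march through a voxel density array with a step
    size (cone footprint) that grows with distance, accumulating opacity
    front-to-back, analogous to sampling coarser mip levels farther out."""
    dist, opacity = 1.0, 0.0
    while dist < max_dist and opacity < 1.0:
        footprint = max(1.0, aperture * dist)   # cone widens with distance
        i = int(start + direction * dist)
        if 0 <= i < len(density):
            a = min(1.0, density[i] * footprint)
            opacity += (1.0 - opacity) * a      # front-to-back compositing
        dist += footprint                       # coarser steps farther out
    return opacity
```

A wide aperture gives cheap, blurry (glossy) reflections in a few steps; a narrow aperture approaches a sharp mirror ray at higher cost, which is what makes the technique attractive for real-time use.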

#6 BlazeCell   Members   -  Reputation: 451


Posted 09 August 2012 - 04:13 AM

With cube-mapping, you'd carefully cut up the mesh, and render a cube-map from inside each individual pillar.

Well, then it's no longer a single piece of geometry. Never mind figuring out how to cut up something like that in real time while accounting for the camera's perspective in a given frame; that sounds incredibly complicated.

Finding ways to mix ray-tracing and rasterization together is the cutting edge at the moment.
One of the best examples of this cutting-edge work is this video of Voxel Cone Tracing. They dynamically construct a voxelized version of the polygonal scene and inject surface light values into the voxels. Then, to compute reflections, they use cone tracing to step through the voxel structure to see where the reflected rays intersect.

This actually seems rather viable. I'd need to do more research into this technique, but from that video I'd suppose you could achieve self reflection with LOD based on the granularity of the voxels. Thanks for the tip.






