An idea for rendering geometry

So from what I understand, it might not be too bad for up-close objects, but the need for more detail never stops growing with distance.
So at some distance it would become impossible to render something correctly without more information per pixel.
You could try something like taking a number of samples per pixel based on the distance (rough sketch below).
But I'm starting to see why there's no way to actually make an algorithm that takes the same time regardless of view direction and number of objects. Just too good to be true >.<
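For reference, a rough sketch of the distance-based sample count idea; the growth rate and the cap are arbitrary placeholders, not tuned values:

// Rough sketch of the "more samples with distance" idea.
// The growth rate and the cap are arbitrary placeholders, not tuned values.
#include <algorithm>
#include <cmath>
#include <cstdio>

int samplesForPixel(float distance)
{
    const float samplesPerUnit = 0.5f;                                    // assumed growth rate
    int n = 1 + static_cast<int>(std::floor(distance * samplesPerUnit));
    return std::clamp(n, 1, 64);                                          // cap so the per-pixel cost stays bounded
}

int main()
{
    const float distances[] = {1.0f, 10.0f, 100.0f, 1000.0f};
    for (float d : distances)
        std::printf("distance %7.1f -> %d samples\n", d, samplesForPixel(d));
    return 0;
}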
Well, if the geometry were completely static, you could bake the geometry into a buffer containing as many prerendered hash table images (images containing all the geometry needed for each pixel) as possible, rendered from as many directions on the hemisphere as possible (orthographic projection). You could then use this buffer to render the geometry from any point with any view direction in constant time (as long as the hash tables only contain 1 geometry intersection per bucket).

You'd still only have one sample per pixel though :/

Also, this could only work in theory. The buffer would probably be multiple terabytes to support rendering HD images in constant time.
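For a rough sense of scale, here's a back-of-envelope estimate; every number in it (resolution, bytes per bucket, spacing between baked directions) is an assumption picked for illustration, and it already lands in the terabyte range:

// Back-of-envelope storage estimate for the baked-buffer idea.
// Every constant here is an assumption chosen for illustration only.
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979;
    const double width = 1920.0, height = 1080.0;   // target HD resolution (assumed)
    const double bytesPerBucket = 16.0;             // e.g. depth + geometry id per pixel (assumed)
    const double spacing = 0.5 * pi / 180.0;        // 0.5 degree spacing between baked directions (assumed)

    // Approximate directions needed: hemisphere solid angle / solid angle per direction.
    const double directions = (2.0 * pi) / (spacing * spacing);

    const double totalBytes = directions * width * height * bytesPerBucket;
    std::printf("directions: %.0f  storage: %.2f TB\n", directions, totalBytes / 1e12); // ~2-3 TB with these numbers
    return 0;
}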
I think that's what Reyes renderers do in general. Micropolygon tessellation, basically?

http://en.wikipedia.org/wiki/Reyes_rendering
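As I understand it, the core Reyes rule is to keep splitting a surface until each piece projects to about a pixel or less. A toy version of that stopping rule might look like this (the Patch type and the way it splits are made up for illustration, not a real Reyes implementation):

// Toy illustration of the Reyes "dice until micropolygon-sized" rule.
// The Patch type and the splitting scheme are made up for illustration only.
#include <cstdio>
#include <vector>

struct Patch {
    float screenSize; // projected extent in pixels (stands in for real projection math)
};

void dice(const Patch& p, std::vector<Patch>& micropolygons)
{
    if (p.screenSize <= 1.0f) {          // small enough: keep it as a micropolygon
        micropolygons.push_back(p);
        return;
    }
    Patch half{p.screenSize * 0.5f};     // stand-in for splitting the patch into smaller pieces
    dice(half, micropolygons);
    dice(half, micropolygons);
}

int main()
{
    std::vector<Patch> micro;
    dice(Patch{64.0f}, micro);           // a patch covering roughly 64 pixels
    std::printf("micropolygons: %zu\n", micro.size());
    return 0;
}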

