Hi guys! Again...
So I'm trying to follow an article/tutorial on volume rendering from GraphicsRunner (Great!) [ http://graphicsrunner.blogspot.dk/2009/01/volume-rendering-101.html ].
But there's one part I don't understand, namely this passage:
We could always calculate the intersection of the ray from the eye to the current pixel position with the cube by performing a ray-cube intersection in the shader. But a better and faster way to do this is to render the positions of the front and back facing triangles of the cube to textures. This easily gives us the starting and end positions of the ray, and in the shader we simply sample the textures to find the sampling ray.
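Just to show how I've understood that passage: the two textures give you, per pixel, the ray's entry and exit point on the cube, and the shader rebuilds the sampling ray from them. Here's that idea sketched in Python with numpy — `front_tex` and `back_tex` are made-up 2×2 "textures" of cube-space positions, purely placeholder data, not the tutorial's actual code:

```python
import numpy as np

# Hypothetical 2x2 "textures" holding cube-space positions (in [0,1]^3)
# of the front- and back-facing triangles, as written by the two passes.
front_tex = np.array([[[0.2, 0.3, 0.0], [0.6, 0.3, 0.0]],
                      [[0.2, 0.7, 0.0], [0.6, 0.7, 0.0]]])
back_tex  = np.array([[[0.2, 0.3, 1.0], [0.6, 0.3, 1.0]],
                      [[0.2, 0.7, 1.0], [0.6, 0.7, 1.0]]])

def sampling_ray(px, py, num_steps=8):
    """Sample both textures at one pixel and build the marching ray."""
    start = front_tex[py, px]           # ray entry point on the cube
    end   = back_tex[py, px]            # ray exit point
    direction = end - start
    length = np.linalg.norm(direction)  # total distance to march
    step = direction / num_steps        # fixed-step increment
    # positions along the ray at which the 3D volume texture is sampled
    samples = [start + step * i for i in range(num_steps + 1)]
    return start, end, length, samples

start, end, length, samples = sampling_ray(0, 0)
```

So instead of a ray-cube intersection per pixel, the shader just does two texture reads and a subtraction — at least that's how I read it.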
So I understand that I have to render the front-culled and back-culled positions to separate textures, which I'm doing and it works (it looks fine). But does he mean rendering the model's positions in view + projection space, or rendering them as a texture that gets sampled onto the cube using the cube's own texture coordinates?
Thanks, as always.