richardmonette

Moving Large Array of Vertices Into GPU Memory


With all the different attempts at GPU raytracing going on right now, I am curious what techniques are being used to move the lists of triangles into GPU memory. One of the obvious things to do would seem to be encoding the triangles into a texture and sending that as usual, but how would one then sample such a texture many, many times? Even with a kd-tree, I'd guess off the top of my head that any given ray might need to be compared against several hundred triangles. That being the case, just putting the shader into a loop of ray-triangle intersection tests isn't likely to work very well, is it? Does anyone have good resources that outline how this step of the process is handled?
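
For concreteness, here's roughly what I had in mind for the texture-encoding part. It's just a host-side C++ sketch assuming an OpenGL 3.x context through GLEW; the Triangle struct and the uploadTriangles name are made up, and a real scene would have to be tiled across texture rows instead of crammed into one:

// Hypothetical sketch: pack one vertex per RGB32F texel, three texels per
// triangle, so a shader can later fetch the vertices back.
#include <GL/glew.h>
#include <vector>

struct Triangle { float v0[3], v1[3], v2[3]; };

GLuint uploadTriangles(const std::vector<Triangle>& tris)
{
    // Flatten to raw floats: 3 vertices * 3 components per triangle.
    std::vector<float> texels;
    texels.reserve(tris.size() * 9);
    for (const Triangle& t : tris) {
        texels.insert(texels.end(), t.v0, t.v0 + 3);
        texels.insert(texels.end(), t.v1, t.v1 + 3);
        texels.insert(texels.end(), t.v2, t.v2 + 3);
    }

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Nearest filtering and no mipmaps, so fetches return the exact floats.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // Single-row layout just for illustration: width = 3 texels per triangle.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F,
                 (GLsizei)(tris.size() * 3), 1, 0,
                 GL_RGB, GL_FLOAT, texels.data());
    return tex;
}

The shader side would then rebuild each triangle from three texelFetch calls (or texture2D with computed coordinates on older hardware) inside the intersection loop, which is exactly the "many, many samples" part I'm worried about.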

Quote:
Original post by richardmonette
One of the obvious things to do would seem to be encoding the triangles into a texture and sending that as usual, but how would one then sample such a texture many, many times? Even with a kd-tree, I'd guess off the top of my head that any given ray might need to be compared against several hundred triangles.

When it comes to GPGPU stuff like raytracing, pretty much all the data comes in as textures, yes. As for how you sample the texture, if you're actually writing shaders, you do it with dependent texture reads. That's a really terrible way to do GPGPU, though, and most people don't do it anymore. Instead they use CUDA and (to a much lesser extent) CTM, which let them represent the computation more natively.
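
To make the "more natively" part concrete, here's a rough CUDA sketch; every name in it is invented. The triangles just live in a plain device array, and each thread brute-forces one ray against all of them (no kd-tree, so it's a toy, but it shows why nothing has to be contorted into textures):

// Hypothetical CUDA sketch (Triangle, Ray, intersectRays, etc. are made-up
// names for illustration). One thread handles one ray.
#include <cuda_runtime.h>
#include <cfloat>
#include <math.h>

struct Triangle { float3 v0, v1, v2; };
struct Ray      { float3 origin, dir; };

__device__ float3 sub3(float3 a, float3 b)   { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__device__ float3 cross3(float3 a, float3 b) { return make_float3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x); }
__device__ float  dot3(float3 a, float3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moeller-Trumbore ray/triangle test; returns hit distance, or -1 on a miss.
__device__ float intersect(const Ray& r, const Triangle& t)
{
    float3 e1  = sub3(t.v1, t.v0);
    float3 e2  = sub3(t.v2, t.v0);
    float3 p   = cross3(r.dir, e2);
    float  det = dot3(e1, p);
    if (fabsf(det) < 1e-8f) return -1.0f;    // ray parallel to triangle
    float  inv = 1.0f / det;
    float3 s = sub3(r.origin, t.v0);
    float  u = dot3(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    float3 q = cross3(s, e1);
    float  v = dot3(r.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    return dot3(e2, q) * inv;                // distance along the ray
}

// Brute force: every thread walks the whole triangle list for its ray.
__global__ void intersectRays(const Ray* rays, int numRays,
                              const Triangle* tris, int numTris,
                              float* hitDist)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRays) return;
    float nearest = FLT_MAX;
    for (int j = 0; j < numTris; ++j) {
        float d = intersect(rays[i], tris[j]);
        if (d > 0.0f && d < nearest) nearest = d;
    }
    hitDist[i] = (nearest < FLT_MAX) ? nearest : -1.0f;
}

You'd cudaMemcpy the rays and triangles up, launch with something like intersectRays<<<(numRays + 255) / 256, 256>>>(...), and read the hit distances back. In a real tracer the inner loop gets replaced by kd-tree or BVH traversal, which is where CUDA's arbitrary memory reads and real branching beat the dependent-texture-read approach.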
