Moving Large Array of Vertices Into GPU Memory

With all the different attempts at GPU raytracing going on right now, I am curious what techniques are being used to move the lists of triangles into GPU memory. It would seem like one of the obvious things to do is to encode the triangles into a texture and then send that like any normal texture, but how would one sample such a texture many, many times? Even with a kd-tree, I'd guess off the top of my head that any given ray might need to be tested against several hundred triangles. That being the case, just putting the shader into a loop of ray-triangle intersection tests isn't likely to work very well, is it? Does anyone have any good resources that outline how this step of the process is handled?
I think for moving a large number of vertices you can use Vertex Buffer Objects.

http://www.opengl.org/wiki/Vertex_Buffer_Objects
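A minimal upload sketch, assuming GLEW (or a similar loader) provides the buffer-object entry points; the function name and data layout here are just illustrative:

#include <GL/glew.h>
#include <vector>

// Copies an interleaved float array (e.g. x,y,z per vertex) into a
// buffer object that lives in GPU memory.
GLuint uploadVertices(const std::vector<float>& verts)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);                        // create the buffer object
    glBindBuffer(GL_ARRAY_BUFFER, vbo);           // bind it as a vertex buffer
    glBufferData(GL_ARRAY_BUFFER,
                 verts.size() * sizeof(float),    // size in bytes
                 verts.data(),                    // source data on the CPU
                 GL_STATIC_DRAW);                 // hint: written once, drawn many times
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}

Once the data is in the buffer, you bind it and set up glVertexPointer / glVertexAttribPointer as usual before drawing.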
Quote: Original post by richardmonette
It would seem like one of the obvious things to do is to encode the triangles into a texture and then send that like any normal texture, but how would one sample such a texture many, many times? Even with a kd-tree, I'd guess off the top of my head that any given ray might need to be tested against several hundred triangles.

When it comes to GPGPU work like raytracing, pretty much all the data does come in as textures, yes. As for how you sample the texture: if you're actually writing shaders, you do it with dependent texture reads. That's a really terrible way to do GPGPU, though, and most people don't do it any more. Instead they use CUDA and (to a much lesser extent) CTM, which let them represent the computation more natively.
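For what it's worth, the CPU-side texture encoding the original post describes might look roughly like the sketch below. This is just an illustration, not a recommendation: it assumes GLEW and the ARB_texture_float format, and packing one position per RGB32F texel (three texels per triangle) is only one possible layout. The shader would then walk the triangle list with nearest-neighbour fetches in a loop, which is exactly the dependent-read pattern, and the performance worry, mentioned above.

#include <GL/glew.h>
#include <vector>

// Packs a flat array of vertex positions (x,y,z per vertex, 3 vertices per
// triangle) into a 1-row floating-point texture that a shader can index.
GLuint uploadTrianglesAsTexture(const std::vector<float>& positions, int triangleCount)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Nearest filtering: we want the exact stored values, not interpolated ones.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // One texel per vertex: width = 3 * triangleCount, height = 1.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F_ARB,
                 triangleCount * 3, 1, 0,
                 GL_RGB, GL_FLOAT, positions.data());
    return tex;
}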
This is one of the beefs I have with PhysX right now: you can do fluids, but getting the data out of the physics sim is hardly optimal, since there is no way to reference it in GPU memory, to my knowledge.

------------------------------

redwoodpixel.com

