Hi,
can someone point me to some literature about how to render to a 3d texture from within a shader?
I know how to do this with a fbo and then writing the result of a render pass into a 2d texture, but how could this work for 3d? The render result is always projected onto a plane, right?
What I want to do as a first experiment is to render some spheres (just given a point and a radius) into a 3d texture that has values of 0 everywhere except in the interior of the spheres.
Any help is appreciated!
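What I have in mind, in shader terms, is something like the following: render the volume slice by slice, binding each slice of the 3D texture as the FBO color attachment, and let a fragment shader decide inside/outside. This is just a sketch; all uniform names are placeholders I made up:

```glsl
#version 120
// One full-screen pass per slice of the 3D texture.
// uSliceZ, uSpheres, uNumSpheres are illustrative names.
uniform float uSliceZ;        // z of the current slice in texture space [0,1]
uniform vec4  uSpheres[32];   // xyz = center, w = radius, in texture space
uniform int   uNumSpheres;

void main() {
    // voxel position in [0,1]^3: xy from the full-screen quad, z from the slice
    vec3 p = vec3(gl_TexCoord[0].xy, uSliceZ);
    float inside = 0.0;
    for (int i = 0; i < uNumSpheres; ++i) {
        if (distance(p, uSpheres[i].xyz) < uSpheres[i].w)
            inside = 1.0;
    }
    gl_FragColor = vec4(inside);  // 1 inside any sphere, 0 elsewhere
}
```

But I'm not sure whether looping over slices on the CPU like this is the intended way to do it.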
Render to 3d texture with glsl?
When you render with shaders you are rendering into a viewport... every pixel rendered is part of that viewport. So rendering a volume into a 3D texture in a single pass doesn't make sense... you're producing a 2D image.
What you can try to do is have each layer bound as a different output and put the correct pixel into each one accordingly. Either that or use geometry shaders to render several views at once (for each slice through the objects).
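The geometry-shader variant might look roughly like this, assuming the 3D texture is attached as a layered render target with glFramebufferTexture; all names here are illustrative, not from any particular sample:

```glsl
#version 150
// Sketch: replicate each triangle into every layer of a layered FBO.
// gl_Layer selects the slice of the attached 3D texture; the fragment
// shader would then keep only the fragments belonging to that slice.
layout(triangles) in;
layout(triangle_strip, max_vertices = 96) out;  // 3 vertices * up to 32 slices

uniform int uNumSlices;  // illustrative; must be <= 32 with this max_vertices

void main() {
    for (int layer = 0; layer < uNumSlices; ++layer) {
        for (int v = 0; v < 3; ++v) {
            gl_Layer    = layer;                 // route to this slice
            gl_Position = gl_in[v].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

Note that max_vertices must be a compile-time constant, so the slice count per pass is bounded; larger volumes would need multiple passes or instancing.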
Frankly you'd be better off using a voxel grid and densities and later converting it to a texture.
Hi,
there is a chapter about that topic in the book Real-Time Volume Graphics (by Engel, Hadwiger, etc.). It's an example of converting a polygonal mesh into a 3d texture.
regards,
c-mos
I think the example code at NVIDIA is what you are after:
http://developer.download.nvidia.com/SDK/10/opengl/samples.html
See the sample called "Render to 3D Texture"
@Simon_Roth: This was exactly what I was thinking...
@ruysch: This looks pretty much exactly as what I was looking for, thank you!
The more I think about it, though, the more unsure I am about whether this is the right approach. My data is largely sparse and I only need to render the frontmost isosurface.
What I want to do (in the end) is to move to the GPU a lot of the computation that currently happens on the CPU and requires a lot of texture data transfers.
Right now, I simulate a huge number of moving particles on the CPU, which are then used to compute a density function; I store it in a 3D texture and transfer it to the GPU each frame. Finally I render either using marching cubes or raycasting on the GPU. This is slow as hell :)
So I thought of creating the scalar density field on the GPU, thereby only transferring the particle positions (and the NVIDIA example addresses this).
Could it be more efficient, however, to render meshes of spheres around the particles and then use this info directly for isosurface creation (using depth peeling or something)?
Thanks for your answers so far!
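For the density pass I'm imagining something like this per slice: draw the particles as point sprites into each slice with additive blending, so their contributions sum up. Just a sketch, with made-up names:

```glsl
#version 120
// Fragment shader for one particle sprite rendered into one slice.
// Additive blending (GL_ONE, GL_ONE) accumulates density over particles.
// uSliceZ, uRadius, vParticlePos are illustrative names.
uniform float uSliceZ;       // slice depth in volume space [0,1]
uniform float uRadius;       // particle influence radius
varying vec3  vParticlePos;  // particle center, passed from the vertex shader

void main() {
    // position of this fragment within the current slice
    vec3 p = vec3(gl_TexCoord[0].xy, uSliceZ);
    float d = distance(p, vParticlePos);
    // smooth falloff to zero at uRadius; zero contribution outside
    float w = max(0.0, 1.0 - d / uRadius);
    gl_FragColor = vec4(w * w);
}
```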
> to render meshes of spheres around the particles
This sounds like a cheap way of getting started with ray marching: start rays where the mesh starts and march through the volume. There are tons of papers on volume rendering; here's something to get you started:
http://old.vrvis.at/via/resources/course-volgraphics-2004/course28.pdf
Also you might look into PBO texture updates, to speed up transfer from CPU side ram to GPU video memory. http://http.download.nvidia.com/developer/Papers/2005/Fast_Texture_Transfers/Fast_Texture_Transfers.pdf
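A minimal front-to-back ray-marching fragment shader along those lines might look like this; the texture and uniform names are just for illustration:

```glsl
#version 120
// "Start rays where the mesh starts": entry positions come from a
// previous pass that rendered front faces into uEntryTex.
uniform sampler2D uEntryTex;    // volume-space ray entry point per pixel
uniform sampler3D uVolume;      // scalar density field
uniform vec3      uRayDirStep;  // normalized ray direction * step length
uniform int       uMaxSteps;

void main() {
    vec3 pos = texture2D(uEntryTex, gl_TexCoord[0].xy).xyz;
    vec4 accum = vec4(0.0);
    for (int i = 0; i < uMaxSteps; ++i) {
        float s = texture3D(uVolume, pos).r;
        vec4 col = vec4(s);  // trivial transfer function: density as gray + alpha
        // front-to-back compositing
        accum.rgb += (1.0 - accum.a) * col.a * col.rgb;
        accum.a   += (1.0 - accum.a) * col.a;
        if (accum.a > 0.95) break;  // early ray termination
        pos += uRayDirStep;
    }
    gl_FragColor = accum;
}
```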
"The more I think about it, though, the more unsure I am about whether this is the right approach. My data is largely sparse and I only need to render the frontmost isosurface."
Well, your doubts are valid:
If you need volume rendering just to visualize the result of your modeling and it is not the prime focus of your research, I would not advise you to get into the “volume rendering” development mess. It is very complex and time consuming to master a decent VR engine, unless a crappy & slow one is fine with you. There is no decent GPU VR (by my standard): once the data size and the size of the projection plane get bigger and/or the interactive rendering quality is set higher, the interactive speed rapidly deteriorates, way below an interactive rate. Bottom line, GPU VR is not scalable at all, mostly because the GPU SIMD architecture is incapable of adapting/changing the code path to take advantage of local properties of the data; once each ray has a unique code path driven by the local properties of the data along the ray, the GPU is totally inferior compared to a modern multi-core CPU. Therefore, a modern GPU may provide decent VR performance only for relatively small volumetric data. Probably the best GPU-based volume renderer I'm aware of is Voreen. There are new multi-core CPU-based VR engines with excellent scalability; the major performance differentiation between CPU-based engines is the threshold at which they take over from the GPU. Currently, this threshold for the best from the two camps is around a 512x512x512 (16-bit) data set, a 700x700 viewport and a frame rate of ~8 FPS, with an interactive sampling density along the ray of 8+ samples per cell, on dual E5620 / 4GB hardware vs. a desktop with dual SLI GeForce GTX 480 (Fermi) (this estimation is very conservative and cautious; probably a 256x256x256 cube is more accurate).
The best CPU-based VR has logarithmic scalability, while the GPU suffers from a cubic dependency for brute-force texture mapping (even though it is greatly sped up by hardware circuits, that cannot change its cubical nature); admittedly, it can be significantly improved via adaptive sampling of small texture bricks, but the complexity of such a development mess is very high, so in practice it has so far remained in the domain of research, not product development (I would love to be proved wrong).
Stefan
[Edited by - stefanbanev on July 3, 2010 9:37:58 PM]
> I would not advise you to get in “volume rendering” development mess
I wouldn't be discouraged; once you get your mind wrapped around the problem it's all just a matter of understanding the algorithm you wish to implement and writing the code. If you have a fair understanding of the GPU it shouldn't be too hard for you to implement a GPU volume renderer.
If you do get any problems you can always post here at gamedev
@stefanbanev: Thanks for your answer, and I agree: if this were for my research I'd pick some third-party software and use that. This whole thing is just for fun, or more specifically, I want to quit my research path at university (I'm a postdoc in physics) at some point in the future, so I'll try to get my hands dirty with some somewhat complex stuff (to show off ;) ) and improve my coding skills.
@ruysch: I actually have an implementation of a raycaster already, following the method of drawing the front and back faces of a colored cube (in fact I have an improved version where I draw only the "real" bounding volume, with empty-space skipping and ray refinement). The results are really neat and maybe I'm going to post some pictures of a physics QM simulation soon, but as I said in the original message, it's not really suited for animations in my opinion. Every field has to be preprocessed heavily (normals etc.) for interactive framerates.
I have thought about this problem over the past few days and now my idea is to compute the surface from implicit representations. What I'm going to do is:
1. Draw ellipsoids to approximate the bounding volume from their parametric representations. A very fast and high-quality approach is to use point sprites and render the depth values in the fragment shader.
2. Use depth peeling to determine ray intersections with bounding volume. Can also be done in the shader.
3. With the intersection points determined in 2. one knows the start depth values for the ray and can now compute the iso surface.
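For step 1, the point-sprite depth trick could look roughly like this (a sketch for the sphere case; all uniform/varying names are made up):

```glsl
#version 120
// Sphere impostor on a point sprite: the fragment shader reconstructs
// the sphere surface analytically and writes the true depth.
uniform float uRadius;      // sphere radius in view space
uniform mat4  uProj;        // projection matrix
varying vec3  vCenterView;  // sphere center in view space, from the vertex shader

void main() {
    // gl_PointCoord in [0,1]^2 -> offset in [-1,1]^2 across the sprite
    vec2 q = gl_PointCoord * 2.0 - 1.0;
    float r2 = dot(q, q);
    if (r2 > 1.0) discard;            // outside the sphere's silhouette
    float z = sqrt(1.0 - r2);         // height of the sphere surface
    vec3 posView = vCenterView + uRadius * vec3(q, z);
    vec4 clip = uProj * vec4(posView, 1.0);
    gl_FragDepth = (clip.z / clip.w) * 0.5 + 0.5;  // window-space depth
    gl_FragColor = vec4(1.0);
}
```

For ellipsoids one would additionally transform the per-fragment ray by the ellipsoid's inverse scaling, but the sphere version shows the idea.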
I'll let you know how it goes.
Thomas
Quote:Original post by ruysch
> I would not advise you to get in “volume rendering” development mess
I wouldnt be discourged, once you get your mind wrapped around the problem its all just a matter of understanding the algorithm you wish to implement and writting the code.
Well, that's apparently true for known/public algorithms. In the case of volume rendering, the practically relevant known/public algorithms have cubic time complexity; even Fourier volume rendering has N^2*log2(N) time complexity, and it is not by chance that it remains in the PhD domain. The GPU is good for brute-force VR, therefore its scalability sucks; smart adaptive algorithms are not well suited for the GPU due to its SIMD limitations, and that is one of the reasons why adaptive VR algorithms remain uncharted public territory; besides, such algorithms are really difficult to implement. There are several proprietary CPU-based VR renderers with logarithmic time complexity, so they take over from GPU VR above some size threshold, and this threshold is rapidly going down with the multi-core AMD/Intel war. To keep up, the GPU would have to maintain cubic growth in the number of transistors, or eventually become an efficient MIMD machine like the i7/Opteron.
Stefan