GPU Tessellation

Started by
18 comments, last by ZealousEngine 16 years, 4 months ago
In case you missed the SIGGRAPH video, the slides are still available (because of the parentheses in the URL you may need to cut and paste the link): http://ati.amd.com/developer/Eurographics/2007/Tatarchuk-Tessellation(EG2007).pdf

The theory is pretty simple: take a low-res control mesh and subdivide it at runtime on the GPU (based on distance from the camera, etc.). So if you wanted to render a patch of terrain, you would start with a low-resolution control mesh and 'add detail' by tessellating a similar 'simple' quad patch over and over. You then interpolate the heights/UVs to fill in the space between the control mesh points. This is how the Xbox 360 game 'Viva Pinata' did its terrain. However, the paper doesn't give many details on the actual implementation.

Let's assume I have a simple quad and I want to tessellate it as I move the camera closer. Here's what I don't get:

1.) How do I pass the control mesh 'info' to the 'tessellation' vertex shader?

2.) Assuming I can get the control mesh info to the tessellation shader, how does the tessellation shader itself work? How does it render multiple instances of a mesh in a single draw call?

If anyone has a link to an actual implementation of GPU tessellation, I would love to see it! Thanks for any help!
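To make the 'interpolate the heights/UVs' step concrete, here is a minimal CPU-side sketch of the idea (in Python just to show the math; on the GPU the same blend would run in a vertex shader with the patch corners passed as shader constants). All names and the data layout are illustrative, not from the paper:

```python
# Hypothetical sketch: bilinearly interpolate heights/UVs across a quad
# control patch, producing an (n+1) x (n+1) grid of tessellated vertices.

def lerp(a, b, t):
    return a + (b - a) * t

def tessellate_quad(corners, n):
    """corners: [p00, p10, p01, p11], each a dict with 'height' and 'uv'."""
    p00, p10, p01, p11 = corners
    verts = []
    for j in range(n + 1):
        v = j / n
        for i in range(n + 1):
            u = i / n
            # Bilinear blend of the four corner heights.
            h = lerp(lerp(p00['height'], p10['height'], u),
                     lerp(p01['height'], p11['height'], u), v)
            # Same blend for each UV component.
            uv = (lerp(lerp(p00['uv'][0], p10['uv'][0], u),
                       lerp(p01['uv'][0], p11['uv'][0], u), v),
                  lerp(lerp(p00['uv'][1], p10['uv'][1], u),
                       lerp(p01['uv'][1], p11['uv'][1], u), v))
            verts.append((u, v, h, uv))
    return verts
```

Increasing n as the camera approaches is the 'add detail' part; the open questions above are about where this loop lives on the GPU.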
GPU-Based Screen Space Tessellation

http://www.tronte.com/content/SST_tromsoe_jmh_trh.pdf
A classmate of mine wrote a nice paper on GPU tessellation last semester. It looks pretty good, and has implementation details. There are references to actual project and solution files, but unfortunately those aren't available.
Viva Pinata uses per-edge tessellation factors and the built-in Xbox 360 tessellator hardware. You basically get the barycentrics and vertices in the shader, and you just fetch whatever you need there (the 360 gives you full control over fetching, so you're not restricted to declaring up front what you need as input, just fetch away). Compute the vertex based on the barycentrics (computed by tessellation hardware) and offset it by a heightmap and that's it.
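In plain math terms, the per-vertex work described above is just a barycentric blend plus a displacement. A hypothetical sketch (Python standing in for the shader; function names are illustrative, not any real API):

```python
# Hypothetical sketch of the post-tessellation vertex computation:
# blend a triangle's control vertices by the barycentric weights the
# tessellator hands you, then displace along the normal by a height sample.

def interpolate_vertex(v0, v1, v2, bary):
    """v0..v2: control vertices as (x, y, z); bary: (b0, b1, b2)."""
    b0, b1, b2 = bary
    return tuple(b0 * a + b1 * b + b2 * c for a, b, c in zip(v0, v1, v2))

def displace(pos, normal, height, scale=1.0):
    """Offset the interpolated position by a heightmap sample."""
    return tuple(p + n * height * scale for p, n in zip(pos, normal))
```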

The edge factors are computed in a separate pass using memory export to write from the shader to a vertex buffer containing the per-edge factors. This is just a shader, so you can do any math you want. For example, you would fetch the control mesh edge vertices in question, compute the screen-space length, and then set the tessellation factor based on that.
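The screen-space-length math for that edge-factor pass might look like this (a hedged sketch; the projection convention, pixels_per_segment, and the factor clamp are all assumptions, not from the game):

```python
import math

# Hypothetical sketch of the edge-factor pass: project both edge endpoints,
# measure the edge's length in pixels, and map that to a tessellation factor.

def project(p, view_proj, viewport_w, viewport_h):
    """view_proj: 4x4 row-major matrix (nested lists); p: (x, y, z)."""
    x, y, z = p
    clip = [sum(row[c] * v for c, v in enumerate((x, y, z, 1.0)))
            for row in view_proj]
    w = clip[3]
    # Perspective divide, then map NDC to pixel coordinates (y flipped).
    return ((clip[0] / w * 0.5 + 0.5) * viewport_w,
            (0.5 - clip[1] / w * 0.5) * viewport_h)

def edge_tess_factor(p0, p1, view_proj, vw, vh,
                     pixels_per_segment=8.0, max_factor=15.0):
    s0 = project(p0, view_proj, vw, vh)
    s1 = project(p1, view_proj, vw, vh)
    length = math.hypot(s1[0] - s0[0], s1[1] - s0[1])
    # Longer on-screen edges get more subdivisions, clamped to hardware limits.
    return max(1.0, min(max_factor, length / pixels_per_segment))
```

Because adjacent patches share edges (and therefore edge factors), the tessellation along a shared edge matches on both sides, which avoids cracks.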
Thanks for the links to those papers; I will check them out now.

Quote:
Viva Pinata uses per-edge tessellation factors and the built-in Xbox 360 tessellator hardware


You're not saying such a technique isn't possible WITHOUT such built-in tessellator hardware, are you? *That one article is talking about geometry shaders. Aren't geometry shaders DX10? I'm trying to target DX9 SM3.

*Btw, I'm a little fuzzy on the subject, but couldn't geometric instancing be used for tessellation instead of relying on geometry shaders?
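For what it's worth, the instancing idea can be sketched like this: store ONE pre-tessellated unit grid, then draw it many times with per-instance data that places and shapes each copy. A hedged Python illustration of the math (on DX9 the per-instance stream would come from a second vertex buffer; the heightmap callable here is an assumption for the sketch):

```python
# Hypothetical sketch: one canonical grid, many instances.

def unit_grid(n):
    """Canonical (u, v) grid in [0,1]^2, shared by every instance."""
    return [(i / n, j / n) for j in range(n + 1) for i in range(n + 1)]

def place_instance(grid, origin, size, heightmap):
    """Per-instance 'vertex shader': scale/offset the shared grid into
    world space and sample a height. heightmap: callable (x, z) -> y."""
    ox, oz = origin
    out = []
    for u, v in grid:
        x, z = ox + u * size, oz + v * size
        out.append((x, heightmap(x, z), z))
    return out
```

The catch is that every instance gets the same vertex count, so per-edge LOD (as in the tessellator-hardware approach above) doesn't fall out for free; you'd need multiple grid resolutions or stitching to vary detail per patch.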

[Edited by - ZealousEngine on December 8, 2007 9:45:35 AM]
As far as I can tell that presentation IS talking about specialized tessellator hardware, and indeed it looks like this will be required for DX11 (or 10.1 or whatever it will be called).
It can be done without the tessellator, but the tessellator may offer better performance when you are rendering a curved surface (like NURBS or subdivision surfaces). The tessellator takes on tasks that would otherwise be done on the CPU (tessellation) and relieves the CPU/GPU bus overhead.
So does anyone have any DX9, no-tessellator-hardware implementation details?
If you're doing terrain rendering, this article might be helpful: "Terrain Geomorphing in the Vertex Shader" from ShaderX 2 :)
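The core of that geomorphing technique is a distance-based blend between two LOD heights per vertex, which fits in a DX9 vertex shader. A rough sketch of the blend (constants and names are illustrative, not from the article):

```python
# Hypothetical sketch of vertex-shader geomorphing: blend each vertex's
# height between its fine-LOD and coarse-LOD values by camera distance,
# so detail fades in smoothly instead of popping.

def morph_factor(distance, lod_near, lod_far):
    """0 at lod_near (full detail) -> 1 at lod_far (fully coarse)."""
    t = (distance - lod_near) / (lod_far - lod_near)
    return max(0.0, min(1.0, t))

def morphed_height(h_fine, h_coarse, distance, lod_near, lod_far):
    t = morph_factor(distance, lod_near, lod_far)
    return h_fine + (h_coarse - h_fine) * t
```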
Yeah, but with terrain patch rendering you really seem to be doing 100% of the 'what patches go where' calculations on the CPU. I'm still curious to see how tessellation works with DX9 on, say, a character model.

