Terrain tessellation

Started by
4 comments, last by MJP 11 years, 3 months ago
Hi, I'm in the process of implementing tessellated terrain for our engine. Everything already works reasonably well, except that there's no frustum culling and the LOD could be improved. While looking for ways to improve the tessellation I found a paper called "DirectX 11 Terrain Tessellation" by Iain Cantlay. It has lots of interesting ideas about tessellation, but unfortunately there doesn't seem to be any source code available for the implementation. Does anyone know if the source code can be downloaded somewhere?

In the paper frustum culling has been implemented by dividing the terrain into multiple vertex buffers, which are then individually culled and then rendered using instancing. Wouldn't it also be possible to use the hull shader to cull individual patches? Would multiple vertex buffers be the better choice of these two approaches since our application is GPU bound and is likely to remain that way?
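The per-tile approach described in the paper can be sketched on the CPU side: test each tile's bounding box against the six frustum planes and build the instance list from the survivors. This is only a sketch under my own assumptions (plane convention, tile AABBs, the "positive vertex" test); none of it comes from the paper's actual source.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Plane in the form ax + by + cz + d = 0, normal pointing into the frustum.
struct Plane { float a, b, c, d; };

// Axis-aligned bounding box of one terrain tile.
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

// Conservative AABB-vs-frustum test: a box is culled only if it lies
// entirely behind at least one plane ("positive vertex" test).
bool IntersectsFrustum(const AABB& box, const std::array<Plane, 6>& frustum)
{
    for (const Plane& p : frustum)
    {
        // Pick the box corner furthest along the plane normal.
        const float x = (p.a >= 0.0f) ? box.maxX : box.minX;
        const float y = (p.b >= 0.0f) ? box.maxY : box.minY;
        const float z = (p.c >= 0.0f) ? box.maxZ : box.minZ;
        if (p.a * x + p.b * y + p.c * z + p.d < 0.0f)
            return false; // fully outside this plane
    }
    return true;
}

// Build the list of visible tile indices; in the instanced approach this
// list would feed the per-instance buffer for a single draw call.
std::vector<std::size_t> CullTiles(const std::vector<AABB>& tiles,
                                   const std::array<Plane, 6>& frustum)
{
    std::vector<std::size_t> visible;
    for (std::size_t i = 0; i < tiles.size(); ++i)
        if (IntersectsFrustum(tiles[i], frustum))
            visible.push_back(i);
    return visible;
}
```

The hull-shader alternative would instead run a similar test per patch in the patch constant function and set the edge tessellation factors to zero to reject the patch, which trades CPU work for tessellation-stage work.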
Source for that paper is available here: https://developer.nvidia.com/nvidia-graphics-sdk-11-direct3d "TerrainTessellation".

The first thing to keep in mind is that graphics hardware is a lot like a stream of water:

never put unnecessary data into that stream.

You must cull out all the data you are not going to draw :)

It's basic, but it's important.

Current GPUs are so fast that they can render a small visible portion at very good, realistic quality.

Beauty is only skin deep; ugly goes to the bone.

World's only 3D engine tuner and 3D engine guru,

and a real genius inventor :) but with a very kind, warm heart... and suffering from serious depression in Korea.

www.polygonart.co.kr (currently outdated and Korean-only; I will switch it to English and add new material when I'm in better shape :) sorry about that)

The sample doesn't work on my ATI card

Thanks! I never seem to find anything from Nvidia's website.



I implemented basic culling of patches in the hull shader, but it didn't have any noticeable effect on the frame rate of our application. I think it's mainly because our terrain isn't very large and we are mostly bottlenecked by the pixel shader stage anyway.

The next possible optimization that I've been thinking about involves using the stream output to save the tessellated terrain into a buffer, which could then be reused whenever the terrain is rendered again during the same frame. Currently the terrain is tessellated 3 times per frame for different purposes, and it doesn't seem to make a lot of sense to me. Of course saving the tessellation results would increase the GPU memory usage and would probably involve some other overhead as well. And considering that we are already mostly bottlenecked by the pixel shader stage it might not change the frame rate in any way at all. Does anyone have any idea how this kind of situation is usually handled in games?
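As a rough check on the memory cost of caching the tessellated terrain, here is a back-of-envelope estimate. The patch count, tessellation factor, and vertex size below are illustrative assumptions (not numbers from our engine), and it assumes a quad domain whose stream output is written as an unindexed triangle list.

```cpp
#include <cstdint>

// Rough upper bound on the memory needed to keep the tessellated terrain
// around via stream output. All parameters are illustrative assumptions.
std::uint64_t StreamOutBytes(std::uint64_t patchCount,
                             std::uint64_t tessFactor,
                             std::uint64_t bytesPerVertex)
{
    // A quad patch with an edge factor of N tessellates into roughly
    // 2*N*N triangles; stream output writes unindexed triangle lists,
    // so each triangle costs three full vertices.
    const std::uint64_t trianglesPerPatch = 2 * tessFactor * tessFactor;
    return patchCount * trianglesPerPatch * 3 * bytesPerVertex;
}
```

For example, 1024 patches at an edge factor of 16 with 32-byte vertices comes out to 48 MiB, which gives a feel for how quickly the buffer grows with the tessellation factor.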
I haven't tried stream out with tessellation, but I really doubt it would be worth it in the general case. I've used it for skinned meshes so that they don't need to be re-skinned multiple times per frame, and you need to re-use the data a lot of times (at least 4x or so in my experience) for it to actually be a win in terms of performance. For tessellation I would guess that the situation would be even worse, since one of the (potential) benefits of using tessellation is not having to waste bandwidth reading in lots and lots of vertices.
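The break-even intuition here can be sketched with a toy cost model: tessellating from scratch every pass, versus tessellating once, streaming out, and reading the expanded vertex stream back each pass. All the cost values are made-up illustrative units, not measurements from any GPU.

```cpp
// Toy cost model for the stream-out trade-off. Real numbers depend
// entirely on the hardware, the mesh, and the vertex format.
struct StreamOutModel {
    double tessellateCost; // expanding patches through the tessellator once
    double writeCost;      // streaming the expanded vertices to memory
    double readCost;       // pulling the fat vertex stream back in per pass
};

// Cost of tessellating from scratch in every pass.
double CostWithout(const StreamOutModel& m, int passes)
{
    return m.tessellateCost * passes;
}

// Cost of tessellating once, streaming out, then reading back per pass.
double CostWith(const StreamOutModel& m, int passes)
{
    return m.tessellateCost + m.writeCost + m.readCost * passes;
}
```

With reads cheaper than tessellation but a large one-time write, the cached path only wins once the pass count exceeds some threshold, which matches the "re-use it several times before it pays off" experience described above. At three passes per frame it may well sit on the wrong side of that line.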

