DX11 Hardware Voxel Terrain (your opinion)


Now that I'm learning DX11, there are a few new options open to me using hardware (video card). I've been eyeing http://http.developer.nvidia.com/GPUGems3/gpugems3_ch01.html and was wondering what experienced programmers think and know. My concerns are as follows:

-- how much control would I have during generation? What kinds of controls are there?

-- when the terrain is generated, how would I get info about the "height" of a specific location on the terrain?

-- could I implement LOD to this terrain? How?

Remember, I'm asking about DX11, not DX9.......


I would like some comments on this, so I'm bumping it to keep it alive......

Still? No one? No comments whatsoever?


How much control? That depends on your noise, of course. You can do pretty amazing things with multiple noise steps and some pseudo-randomly distributed precalculated models, e.g. trees. YouTube has some nice videos on that.

If you go for plain Perlin noise, for example, you have persistence, frequency, etc. as parameters to play with, and your seed as a starting point.
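
Roughly like this, just to show how those parameters interact (a minimal sketch; noise3() is a placeholder for whatever Perlin/simplex implementation you end up using, and the "subtract y" trick is the density-field idea from the GPU Gems chapter):

// Fractal-noise ("fBm") density field, assuming a hypothetical noise3() that
// returns values in [-1, 1]. Swap in your own Perlin/simplex noise.
float noise3(float x, float y, float z, unsigned seed); // placeholder

float terrainDensity(float x, float y, float z,
                     int octaves, float frequency, float persistence, unsigned seed)
{
    float amplitude = 1.0f;
    float sum = 0.0f;
    for (int i = 0; i < octaves; ++i)
    {
        sum += amplitude * noise3(x * frequency, y * frequency, z * frequency, seed + i);
        frequency *= 2.0f;        // each octave doubles the frequency
        amplitude *= persistence; // persistence < 1 damps the higher octaves
    }
    // Positive below the surface, negative above it, as in the GPU Gems article.
    return sum - y;
}

Lowering the persistence smooths the terrain out, raising the base frequency makes the features smaller, and changing the seed gives you a different world.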

When it is generated, you don't need to know that info, do you? Otherwise it comes down to voxel-space -> world-space calculations.
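
For example, one way to get the "height" at a given (x, z) is to walk down a vertical column of the same density field the mesh generation uses, find the zero crossing, and interpolate (a sketch with made-up names; density() stands in for your own field function):

float density(float x, float y, float z); // placeholder: your noise-based density field

float terrainHeightAt(float worldX, float worldZ, float voxelSize, int maxVoxelY)
{
    // Scan from the top of the volume downwards, looking for the first voxel
    // interval where the density changes sign (air above, solid below).
    for (int vy = maxVoxelY; vy > 0; --vy)
    {
        float yAbove = vy * voxelSize;
        float yBelow = (vy - 1) * voxelSize;
        float dAbove = density(worldX, yAbove, worldZ); // negative above the surface
        float dBelow = density(worldX, yBelow, worldZ); // positive below the surface
        if (dAbove <= 0.0f && dBelow > 0.0f)
        {
            // Interpolate to the zero crossing, just like marching cubes does.
            float t = dBelow / (dBelow - dAbove);
            return yBelow + t * (yAbove - yBelow);
        }
    }
    return 0.0f; // no surface found in this column
}

The other option is to read the extracted mesh back from the GPU and raycast against it.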

LOD... that's tricky. It's of course possible, e.g. by means of the Transvoxel algorithm as proposed by Eric Lengyel.

http://www.terathon.com/voxels/

But the topic is not as trivial as it might seem. I am fiddling around with it myself. :P

Thanks for the reply. I'm looking at the link now. When I originally posted, I hadn't dealt with DX11 very long, so shaders in DX11 seemed too much like magic to me. I have a bit more understanding now, so I think the questions about control and getting the height are something I can get hold of..... For now, I'm going to soak up the knowledge in the link.

Wow. What a mess of a file. When I clicked the link for the "lookup tables" it came out as a hot mess. Here is a more coherent format: TransVoxel.txt

EDIT: I originally placed it in a code fragment, but the page takes forever to load that way. So I uploaded it as a .txt file.

[Mod edit] The copyright holder of that file just asked us to remove it, saying you don't have permission to be redistributing it.

Well, it's a set of lookup tables. I guess it's not that important that it be super-readable, and it serves its purpose well as is. :P

After studying the file, I think it's mainly for voxel generation on the CPU. I want to know how to do it on the GPU (video card)..... I don't even know where to start with this. How do I *use* the GPU to generate and display the terrain?

I *think* I would have a shader (.vs and .ps) written up appropriately for this, and what I would send as data is maybe texture(s), the camera location, the view matrix, the projection matrix, and initially a "seed" value for the noise generator; then the GPU does the work of rendering what I "see"...... I don't know what I would need.
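
Something along these lines is what I'm picturing for the per-frame data (just guessing at names; the actual contents would depend on the shaders):

// A possible constant-buffer layout for the data listed above (hypothetical names).
// The struct mirrors an HLSL cbuffer, so members are kept 16-byte aligned:
// cameraPos and noiseSeed pack into a single 16-byte register.
#include <DirectXMath.h>

struct TerrainConstants
{
    DirectX::XMFLOAT4X4 view;        // view matrix
    DirectX::XMFLOAT4X4 projection;  // projection matrix
    DirectX::XMFLOAT3   cameraPos;   // camera location in world space
    float               noiseSeed;   // seed/offset fed to the noise on the GPU
};

// Uploaded each frame with ID3D11DeviceContext::UpdateSubresource (or Map/Unmap)
// and bound with VSSetConstantBuffers / PSSetConstantBuffers.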

It is indeed for generation on the CPU, but I guess it still applies to the GPU.

A problem with this approach, by the way, is that you would need to implement a volume-based physics engine, or a physics approach on the GPU that works with your extracted meshes, etc.

It's hard to do, I think, if you need physics, that is.

But I didn't implement any of those algorithms on the GPU and I don't plan to, so I can't help that much with that. I just wanted to point you to some possibly helpful things. :)

Well, my implementation from 2 years ago or so used only the basic marching cubes algorithm, and it is by no means a reference on how to write good shaders (hehe). What I did was store the tables on the GPU and then, for each terrain chunk to render, send a single integer per triangle to the vertex shader, interpreted as a bitfield that encodes one triangle of one cube in the chunk (so each integer corresponds to a unique triangle in the chunk). The vertex shader then ran the marching cubes algorithm, which produced the set of triangles for that cube according to the density field (the seed and all that were uploaded in constant buffers), and selected the correct triangle based on the triangle index; so it generated the terrain on the fly. A lot of work was duplicated, but it ran on the GPU.
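
In case it helps, the encoding was in the spirit of this (a CPU-side sketch with hypothetical names; marching cubes emits at most 5 triangles per cell, so 3 bits are enough for the triangle index, and the vertex shader unpacks the bitfield, runs MC for that cell and outputs the requested triangle, or a degenerate one if the cell produces fewer triangles):

#include <cstdint>
#include <vector>

// Build one id per candidate triangle in the chunk; the vertex shader decodes
// (cell index, triangle index) from each id.
std::vector<uint32_t> buildTriangleIds(uint32_t cellsPerChunk)
{
    const uint32_t maxTrisPerCell = 5; // marching cubes never emits more than 5 triangles per cell
    std::vector<uint32_t> ids;
    ids.reserve(cellsPerChunk * maxTrisPerCell);
    for (uint32_t cell = 0; cell < cellsPerChunk; ++cell)
        for (uint32_t tri = 0; tri < maxTrisPerCell; ++tri)
            ids.push_back((cell << 3) | tri); // low 3 bits = triangle index, rest = cell index
    return ids;
}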

In later versions I made it so the terrain was generated only once, in the geometry shader, without any wasted work, and also streamed back out to the CPU so that I could manipulate it CPU-side as well (e.g. generate it as the player moves around and then potentially post-process it). Maybe a compute shader would be better suited at that point, and I'm not sure it was faster, but it looked pretty decent with triplanar texturing (as in the GPU Gems article).
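
The stream-out part was just the standard D3D11 mechanism, roughly like this (a sketch; it assumes gsBytecode/gsSize hold a compiled geometry shader that streams out one float3 POSITION per vertex):

#include <d3d11.h>

// Create a geometry shader whose output is captured into a stream-output buffer
// instead of (or in addition to) being rasterized.
ID3D11GeometryShader* createStreamOutGS(ID3D11Device* device,
                                        const void* gsBytecode, SIZE_T gsSize)
{
    D3D11_SO_DECLARATION_ENTRY soDecl[1] = {
        // stream, semantic name, semantic index, start component, component count, output slot
        { 0, "POSITION", 0, 0, 3, 0 },
    };
    UINT stride = 3 * sizeof(float); // one float3 per streamed-out vertex

    ID3D11GeometryShader* gs = nullptr;
    device->CreateGeometryShaderWithStreamOutput(
        gsBytecode, gsSize,
        soDecl, 1,
        &stride, 1,
        D3D11_SO_NO_RASTERIZED_STREAM, // use 0 instead to also rasterize the stream
        nullptr, &gs);
    return gs;
}

// Bind the target buffer with SOSetTargets() before drawing, then CopyResource()
// it into a D3D11_USAGE_STAGING buffer and Map() that to read the mesh on the CPU.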

Anyway, the basic idea is to set things up so that you can pass a set of cubes to the GPU (either implicitly or explicitly) and tell it to "sculpt" the terrain mesh for those cubes by running MC (or whatever variation you use) on them. From there you can do plenty of optimizations, but that's beyond me, as I didn't really study it in depth.
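
Per chunk, the draw then ends up looking roughly like this (hypothetical names; the exact vertex count depends on how you encode the cube/triangle ids):

#include <d3d11.h>

// Issue the draw that makes the vertex shader "sculpt" one chunk. The lookup
// tables and noise parameters stay bound; only the per-chunk data (world-space
// origin, etc.) changes between draws.
void drawChunk(ID3D11DeviceContext* ctx,
               ID3D11Buffer* chunkConstants,
               UINT vertexCount) // e.g. cells * maxTrisPerCell * 3 for the bitfield scheme
{
    ctx->VSSetConstantBuffers(1, 1, &chunkConstants); // slot 1 for per-chunk data (an assumption)
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->Draw(vertexCount, 0);
}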


