Terrain sculpting

Started by
8 comments, last by RobM 9 years, 2 months ago
I'm just playing about with sculpting my terrain. Without going into too much detail, it's a fairly large terrain (8192x8192) and is based on geomipmapping. I use vertex textures for the heightmaps. I have two modes, streamed and non-streamed. When streaming, I load in 512x512 chunks when they get closer to the camera and use lo-res 32x32 chunks for areas further away.

To sculpt the terrain, I lock the vertex texture and adjust the values inside the circular selection area, vertex by vertex. This does work but is very slow and barely usable. When in non-streaming mode I'm locking the whole vertex texture, which I assume pulls the whole texture into system memory every frame to adjust the values and then sends it back again - not ideal.
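
(For illustration, a rough sketch of the kind of CPU-side lock-and-edit loop being described, assuming a D3DFMT_R32F heightmap created with D3DUSAGE_DYNAMIC so LockRect is legal; the names and the linear falloff are placeholders, not the actual code.)

```cpp
// Hypothetical sketch of the CPU-side lock-and-edit approach described above.
// Assumes a D3DFMT_R32F heightmap created with D3DUSAGE_DYNAMIC so it can be
// locked; clamping the brush rect to the texture bounds is omitted for brevity.
#include <d3d9.h>
#include <cmath>

void SculptCpu(IDirect3DTexture9* heightTex, int brushX, int brushZ,
               int brushRadius, float delta)
{
    RECT r = { brushX - brushRadius, brushZ - brushRadius,
               brushX + brushRadius, brushZ + brushRadius };

    D3DLOCKED_RECT lr;
    if (FAILED(heightTex->LockRect(0, &lr, &r, 0)))
        return;

    for (int y = 0; y < brushRadius * 2; ++y)
    {
        float* row = reinterpret_cast<float*>(
            static_cast<BYTE*>(lr.pBits) + y * lr.Pitch);
        for (int x = 0; x < brushRadius * 2; ++x)
        {
            // Simple linear falloff from the brush centre.
            float dx   = float(x - brushRadius);
            float dy   = float(y - brushRadius);
            float dist = sqrtf(dx * dx + dy * dy);
            if (dist < brushRadius)
                row[x] += delta * (1.0f - dist / brushRadius);
        }
    }
    heightTex->UnlockRect(0);
}
```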

I tried using a rect with the surface lock but this didn't speed it up at all.

I'm only using a 16x16 selection area to sculpt the terrain at the moment but I intend this to be much bigger which means it'll probably be even slower.

Are there any other methods I'm perhaps overlooking?

The other option is to lock and change the heights of the streamed in vertex textures but that seems fiddly to me.

Thanks

If there's a maximum sculpt size, you could write the updated values into a smaller texture and update the full-size texture on the GPU by rendering the smaller texture onto the larger one. You would bind the full-size texture as a render target. The vertex shader could use a screen quad and an orthographic projection to position the rendering so that you only update the part of the full-size texture that is being modified. The pixel shader would sample from the smaller update texture and write the result out to the larger texture. That would cut down on the amount of data sent to the GPU a lot, and I think it could easily handle brush sizes much larger than 16x16.
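
As a rough, untested illustration of that idea: assuming the heightmap is a D3DUSAGE_RENDERTARGET texture and the brush lives in a small texture, a pre-transformed quad placed at the brush rectangle in render-target pixel coordinates is one way to touch only the modified region. The device/texture names are placeholders, and shader, sampler, cull state and BeginScene/EndScene are assumed to be handled by the caller.

```cpp
// Minimal sketch of stamping a small brush texture into a sub-region of the
// full-size heightmap render target. heightmapRT (D3DUSAGE_RENDERTARGET) and
// brushTex are placeholders; shader/sampler/cull state and BeginScene/EndScene
// are assumed to be set up by the caller.
#include <d3d9.h>

struct QuadVert { float x, y, z, rhw, u, v; };
const DWORD kQuadFVF = D3DFVF_XYZRHW | D3DFVF_TEX1;

void StampBrush(IDirect3DDevice9* dev, IDirect3DTexture9* heightmapRT,
                IDirect3DTexture9* brushTex, float left, float top, float size)
{
    IDirect3DSurface9* rtSurf = NULL;
    heightmapRT->GetSurfaceLevel(0, &rtSurf);
    dev->SetRenderTarget(0, rtSurf);        // draw straight into the heightmap

    // Pre-transformed quad covering only the brush rectangle, in render-target
    // pixel coordinates (the -0.5 offset matches D3D9 texel/pixel alignment).
    QuadVert quad[4] =
    {
        { left        - 0.5f, top        - 0.5f, 0, 1, 0, 0 },
        { left + size - 0.5f, top        - 0.5f, 0, 1, 1, 0 },
        { left        - 0.5f, top + size - 0.5f, 0, 1, 0, 1 },
        { left + size - 0.5f, top + size - 0.5f, 0, 1, 1, 1 },
    };

    dev->SetTexture(0, brushTex);
    dev->SetFVF(kQuadFVF);
    dev->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(QuadVert));

    rtSurf->Release();
}
```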

You could go a step further if you wanted and do all of the updating on the GPU. You could get fancy with render targets and write some shaders to make the modifications. Then all you would have to send over to the GPU to update a texture is an operation (up/down, I presume), a position, a delta-time, and a strength. Distance/strength falloffs could be done in the shader too. This would probably be faster than doing the updates on the CPU, heh. You would ultimately have to read the data back from the GPU if you want to save the modified values. You would just have to make sure the read-back to the CPU happens after the user has finished holding down a modification, to avoid render stalls.
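
A sketch of how the brush parameters and the end-of-stroke read-back might look, assuming the falloff maths lives in a pixel shader compiled elsewhere; the constant layout and function names here are just illustrative:

```cpp
// Hypothetical sketch of the "everything on the GPU" variant: the brush
// parameters travel as pixel-shader constants, and the result is only read
// back once the stroke ends.
#include <d3d9.h>

void SetBrushConstants(IDirect3DDevice9* dev, float centreU, float centreV,
                       float radius, float strengthPerSecond, float dt)
{
    // c0 = (centreU, centreV, radius, strength * dt); the shader can do the
    // distance falloff from these four values alone.
    float c0[4] = { centreU, centreV, radius, strengthPerSecond * dt };
    dev->SetPixelShaderConstantF(0, c0, 1);
}

// Called once the user releases the brush, to pull the sculpted heights back
// for saving. Reading back mid-stroke would stall the GPU.
HRESULT ReadBackHeights(IDirect3DDevice9* dev, IDirect3DTexture9* heightmapRT,
                        UINT width, UINT height, D3DFORMAT fmt,
                        IDirect3DSurface9** outSysMemSurf)
{
    IDirect3DSurface9* rtSurf = NULL;
    heightmapRT->GetSurfaceLevel(0, &rtSurf);

    HRESULT hr = dev->CreateOffscreenPlainSurface(width, height, fmt,
                                                  D3DPOOL_SYSTEMMEM,
                                                  outSysMemSurf, NULL);
    if (SUCCEEDED(hr))
        hr = dev->GetRenderTargetData(rtSurf, *outSysMemSurf);

    rtSurf->Release();
    return hr;
}
```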

I can help you plan this more depending on what you want to do. Are you using GL or DX?

Thanks for the post. I considered something similar early on (rendering the stamp 'shape' to the vertex texture) but discounted it, due to not thinking it through properly. I hadn't considered that you can do the falloffs and calcs in the pixel shader - that would work really well.

So I'd use the heightmap vertex texture as a render target, render the sculpt shape to it, then use it again in my terrain shaders. I'll give it a go; I can test this fairly quickly.

I'm using DX9 by the way.

Thanks
I am, of course, assuming you can use a vertex texture (floating point texture) as a render target...

If you're on DX9 you won't be able to read and write the same texture. You will have to ping-pong the modifications between two textures, I think. On the first pass you bind the large texture for reading and an update texture for writing: you read values from the original, calculate the modified values, and write the updated values to the copy. On a second pass you bind the updated texture for reading and the main texture for writing, and copy the updated values back onto the main texture. Again, if you work up some vertex shader maths to target specifically the areas that have changed, these two passes will be really quick. And if there is a maximum brush size, you can limit the size of the copy texture and map the smaller texture onto the larger one for the copy-back pass.
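
Something like this, as a very rough outline, assuming both textures were created with D3DUSAGE_RENDERTARGET in D3DPOOL_DEFAULT; DrawBrushQuad() is a hypothetical helper standing in for the quad/shader setup:

```cpp
// Rough sketch of the two-pass ping-pong described above.
#include <d3d9.h>

// Hypothetical helper: draws a quad covering just the modified region with
// whatever shader is currently bound (defined elsewhere).
void DrawBrushQuad(IDirect3DDevice9* dev);

void PingPongUpdate(IDirect3DDevice9* dev,
                    IDirect3DTexture9* mainHeightmap,
                    IDirect3DTexture9* scratch)
{
    IDirect3DSurface9* mainSurf    = NULL;
    IDirect3DSurface9* scratchSurf = NULL;
    mainHeightmap->GetSurfaceLevel(0, &mainSurf);
    scratch->GetSurfaceLevel(0, &scratchSurf);

    // Pass 1: read current heights, write modified heights into the scratch texture.
    dev->SetRenderTarget(0, scratchSurf);
    dev->SetTexture(0, mainHeightmap);
    DrawBrushQuad(dev);                  // brush/falloff shader bound here

    // Pass 2: copy the modified region back onto the main heightmap.
    dev->SetRenderTarget(0, mainSurf);
    dev->SetTexture(0, scratch);
    DrawBrushQuad(dev);                  // plain copy shader bound here

    mainSurf->Release();
    scratchSurf->Release();
}
```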

You can usually do a single update pass with an additive blend and avoid the ping-pong.

To raise or lower, just additively blend in the brush texture.

To set an absolute elevation, do a normal render with blending again, but set the destination blend factor to zero and the source to one, so your fixed elevation replaces what's there.

You only have to ping-pong if you genuinely need to read the texture in the shader, but for most brush operations you won't.
By thinking in blend terms you are effectively still reading the destination texture, just through the blend unit instead of a sampler.

So if you format your output value and alpha correctly, the blend stage reads in the original value for you.
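
For example, the blend states for those two brush modes might look roughly like this in D3D9, assuming the driver supports post-pixel-shader blending on the heightmap's floating-point format (worth checking with CheckDeviceFormat and D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING); this is a sketch, not tested code:

```cpp
// Blend-state sketches for the two brush modes described above.
#include <d3d9.h>

// Raise (or, with D3DBLENDOP_REVSUBTRACT, lower): dest = dest +/- brush value.
void SetAdditiveBrushBlend(IDirect3DDevice9* dev, bool raise)
{
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
    dev->SetRenderState(D3DRS_BLENDOP,
                        raise ? D3DBLENDOP_ADD : D3DBLENDOP_REVSUBTRACT);
}

// "Set elevation": the source value completely replaces the destination height.
void SetReplaceBrushBlend(IDirect3DDevice9* dev)
{
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
    dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ZERO);
    dev->SetRenderState(D3DRS_BLENDOP,   D3DBLENDOP_ADD);
}
```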

When I built my terrain system, I also used a texture heightmap to determine vertex heights. However, I considered the heightmap values to be an initial starting position for the terrain height values, not the definitive values. After the terrain was loaded, I was free to discard the heightmap textures. If I needed to modify the vertex positions, I would do that within the terrain's internal data values. If I need to save the terrain height, I could easily generate a new height map texture and export it to disk. You could consider this approach if you aren't doing it already.
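
As a tiny illustration of that split (the raw-float file layout here is just an assumption, not a recommendation of any particular format):

```cpp
// Sketch of the "internal data is authoritative" idea: the editor keeps a plain
// float array of heights and can regenerate a heightmap file from it on demand.
#include <cstdio>
#include <vector>

bool ExportHeightmap(const std::vector<float>& heights,
                     int width, int height, const char* path)
{
    FILE* f = std::fopen(path, "wb");
    if (!f)
        return false;

    // Width/height first so the file is self-describing, then the raw samples.
    std::fwrite(&width,  sizeof(int), 1, f);
    std::fwrite(&height, sizeof(int), 1, f);
    std::fwrite(heights.data(), sizeof(float), heights.size(), f);
    std::fclose(f);
    return true;
}
```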

Also: I think you should try to measure the performance of your code. It would help you a lot if you could isolate exactly where your performance slows down so that you aren't making guesses at your optimization needs. It can be something as simple as starting and stopping a stopwatch to count ticks around a code block. Until you do this, we can only guess at what's causing slowdowns.
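
Something as simple as this kind of scoped timer (using QueryPerformanceCounter; the label and usage shown are placeholders) is enough to bracket the suspect lock calls:

```cpp
// Bare-bones "stopwatch" for bracketing a suspect block and printing the time.
#include <windows.h>
#include <cstdio>

struct ScopedTimer
{
    const char*   label;
    LARGE_INTEGER start;

    explicit ScopedTimer(const char* l) : label(l)
    {
        QueryPerformanceCounter(&start);
    }
    ~ScopedTimer()
    {
        LARGE_INTEGER end, freq;
        QueryPerformanceCounter(&end);
        QueryPerformanceFrequency(&freq);
        double ms = 1000.0 * double(end.QuadPart - start.QuadPart)
                           / double(freq.QuadPart);
        std::printf("%s: %.3f ms\n", label, ms);
    }
};

// Usage (hypothetical):
//   { ScopedTimer t("heightmap LockRect"); tex->LockRect(0, &lr, &rect, 0); }
```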

Yes, I did manage to do some profiling and it is the locking/unlocking of the large texture that was the problem - this never really felt like a good solution to be honest.

I'm in the process of changing how it works to use the alpha blending to a render target but I've come across a few issues that I need to iron out.

I've created my vertex texture as a render target, which seems to stop me from being able to lock and write to it (because it's not created with dynamic usage) - can I OR those two usage flags together, I wonder? The reason I need to write to it is when loading the initial heightmap from disk. I guess I could just render the loaded heightmaps into it initially, but that feels like a workaround somehow. Even if I don't load anything into it and just return a value in the pixel shader, nothing happens to the heights, so I've got a bug somewhere.

This new approach definitely feels more like it though (once I've got it working), I can use any number of fancy calcs in the shader to sculpt the terrain.

Seeding the render target from another texture source isn't that uncommon. It's sometimes called staging.

So on load you would create a dynamic texture, fill it with the height data, then do a render into your render-target texture. This lets you keep the render target's access optimal, since it doesn't need CPU write access.

You can also investigate using this with a smaller texture and then do multiple draws into your larger map.

I have seen some geomipmapping schemes, I think, that only need to update the regions that change. So as you move you don't have to update the whole full-res texture, due to the wrapping nature of the texture.

Otherwise you will probably pay the same cost to lock and fill a high-res staging texture.
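
A rough sketch of that seeding pass, assuming the hardware allows a dynamic R32F texture, with a hypothetical DrawFullTargetQuad() helper and error checking omitted:

```cpp
// Sketch of the seeding/"staging" step: the heightmap loaded from disk is
// written into a lockable dynamic texture, then rendered once into the
// render-target heightmap.
#include <d3d9.h>
#include <cstring>

// Hypothetical helper: draws a quad covering the whole render target (similar
// to the brush-quad sketch earlier, but covering the full surface).
void DrawFullTargetQuad(IDirect3DDevice9* dev);

void SeedHeightmapRT(IDirect3DDevice9* dev,
                     IDirect3DTexture9* heightmapRT,   // D3DUSAGE_RENDERTARGET
                     const float* heights, UINT width, UINT height)
{
    // 1. Stage the CPU data in a lockable dynamic texture.
    IDirect3DTexture9* staging = NULL;
    dev->CreateTexture(width, height, 1, D3DUSAGE_DYNAMIC, D3DFMT_R32F,
                       D3DPOOL_DEFAULT, &staging, NULL);

    D3DLOCKED_RECT lr;
    staging->LockRect(0, &lr, NULL, D3DLOCK_DISCARD);
    for (UINT y = 0; y < height; ++y)
    {
        std::memcpy(static_cast<BYTE*>(lr.pBits) + y * lr.Pitch,
                    heights + y * width, width * sizeof(float));
    }
    staging->UnlockRect(0);

    // 2. Render the staging texture once into the render-target heightmap.
    IDirect3DSurface9* rtSurf = NULL;
    heightmapRT->GetSurfaceLevel(0, &rtSurf);
    dev->SetRenderTarget(0, rtSurf);
    dev->SetTexture(0, staging);
    DrawFullTargetQuad(dev);

    rtSurf->Release();
    staging->Release();
}
```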

Thanks, this is what I'll do next I think. I've managed to get the new approach working and it runs a lot smoother and faster although, strangely, sculpting continuously every frame seems to max my frame time out to exactly 20ms, with no fluctuations. Not something I'm going to worry about too much yet, but I do have other things to compute every frame like updating the vertical extents of my quadtree, which may take some time.

Also the terrain normal map. This is another issue, as currently my normal map is also the size of the terrain map. I need/want normal detail at long distances regardless of how many triangles are used in far-off patches (this game is set in the mountains). What I might end up doing is using regular diffuse textures for distant normal detail - I'm not too fussed about the sun moving.

This topic is closed to new replies.
