unique voxel world idea, with a twist

I was just thinking about unique voxel environments again, and I got this idea based upon ray-traced displacement mapping.

Say you have a 1024x1024 chunk of terrain (in a small, *detailed* space, big enough to fit, say, 4x4 men), and you render an inverse cube at its location and write out worldspace coordinates.

If you start from the eye position and trace a ray to the back of the cube's worldspace coordinates, you get a ray which can intersect a heightmap and give you a voxel output.
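
For concreteness, the per-ray march could look something like this rough CPU-side sketch; Vec3, sampleHeight and the unit-cube chunk assumption are all made up for illustration, not from any existing engine:

```cpp
// Rough CPU-side sketch of the per-ray march described above. The chunk is
// assumed to occupy a unit cube, and sampleHeight() stands in for a real
// heightmap texture fetch; all names here are illustrative.
#include <optional>

struct Vec3 { float x, y, z; };

// Hypothetical heightmap lookup; a real version would sample a 1024x1024 map.
float sampleHeight(float x, float z) { return 0.25f + 0.05f * (x + z); }

// March from the ray's entry point on the cube toward its exit point on the
// back faces; return the first position that falls on or below the heightmap.
std::optional<Vec3> traceHeightmap(Vec3 entry, Vec3 exit, int steps = 256)
{
    for (int i = 0; i <= steps; ++i)
    {
        float t = float(i) / float(steps);
        Vec3 p{ entry.x + (exit.x - entry.x) * t,
                entry.y + (exit.y - entry.y) * t,
                entry.z + (exit.z - entry.z) * t };
        if (p.y <= sampleHeight(p.x, p.z))
            return p; // first "voxel" surface hit along the ray
    }
    return std::nullopt; // ray left the chunk without hitting terrain
}
```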

So, then add 5 more layers of heightmaps (a bottom, a top, a bottom, a top, a bottom and a top) and you can have overhangs, with 6 layered displacement maps which are ray-traced.
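
The overhang test might boil down to something like this sketch, assuming three (bottom, top) heightmap pairs per chunk; sampleLayer and its placeholder values are hypothetical, and Vec3 is reused from the sketch above:

```cpp
// Sketch of the six-layer test: a sample point is solid if it falls between
// any of the three (bottom, top) heightmap pairs. sampleLayer() and its flat
// placeholder values are hypothetical; Vec3 is the struct from the sketch above.
float sampleLayer(int layer, float /*x*/, float /*z*/)
{
    // Layers 0..5 are bottom0, top0, bottom1, top1, bottom2, top2.
    static const float placeholder[6] = { 0.0f, 0.3f, 0.4f, 0.6f, 0.7f, 1.0f };
    return placeholder[layer];
}

bool isSolid(Vec3 p)
{
    for (int pair = 0; pair < 3; ++pair)
    {
        float bottom = sampleLayer(pair * 2,     p.x, p.z);
        float top    = sampleLayer(pair * 2 + 1, p.x, p.z);
        if (p.y >= bottom && p.y <= top)
            return true; // inside this slab; the upper pairs give you overhangs
    }
    return false; // in the empty space between slabs
}
```

The single-heightmap test in the march above would then just become a call to isSolid().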

What you have now is a model with no side information, so detect when you hit a side (a position that's between a bottom and a top), access one of 4 normal maps depending on which side it is, and draw the side detail for that chunk.
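
Picking which of the four side normal maps to use could be as simple as checking the dominant horizontal component of the ray direction, roughly like this; the Side enum and function name are illustrative only, and Vec3 is the struct from the first sketch:

```cpp
// Sketch of choosing one of the four side normal maps once a hit is classified
// as a side (strictly between a bottom and a top, rather than on a top or
// bottom surface). The pick here is based on the dominant horizontal component
// of the ray direction; the Side enum and function name are illustrative only.
#include <cmath>

enum class Side { PosX, NegX, PosZ, NegZ };

Side pickSideMap(Vec3 rayDir)
{
    if (std::fabs(rayDir.x) > std::fabs(rayDir.z))
        return rayDir.x > 0.0f ? Side::NegX : Side::PosX; // the face we ran into
    return rayDir.z > 0.0f ? Side::NegZ : Side::PosZ;
}
```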

So, then just add more than one chunk, maybe even stream unique chunks for a large map, and you have another system for a voxel environment. Do you think this is a good idea?

You'd make the map by placing transformed high-poly kit models, I imagine, with additions and subtractions.
So if I understand correctly:

- Ray-march from the eye position to points on the back of a cube.
- Sample a heightmap at every step along the ray and output the heightmap surface where the ray first intersects it.

Sounds a lot like parallax offset mapping, and will have the same problems, mainly prohibitively large texture sampling requirements and the "paper layers" artifact along the edges.
Yeah, that's it. If the GPU can just handle it, it's worthwhile, but it's a lot of samples, yes... I guess the new Nvidia cards would be a good option for it, with their killer pixel shader performance. You'd need a lot of samples to get rid of the paper-layer artifacts; I'd be pushing the samples to the limit.
Sounds something like this? http://www.ericrisser.com/stuff/Rendering3DVolumesUsingPerPixelDisplacementMapping.pdf

Also look at True Impostors in GPU Gems 3: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch21.html

Pixel shaders can run as fast as you like, but once the texture bandwidth is saturated, you're stuck there instead. Plus not everybody has the latest Nvidia; some of us are still using an Intel X3100. *cries*
Thanks for those links PolyVox, checking them out.
The problem with this, as with all "unlimited detail" voxel stuff, is the sheer amount of information you have to store, and I don't see your proposed method particularly solving that. I did, however, forget about that True Impostor paper until now; it would be interesting to see if someone could get it working in a next-gen game as a replacement for distant LOD models. It might work wonders for detailed forests or grasslands if it were faster.
