Is this technologically feasible?

Started by
36 comments, last by wodinoneeye 14 years, 11 months ago
You don't necessarily need a lot of voxels per mile, assuming the voxels are not used for rendering but only for game logic.
One voxel per cubic meter should be enough.
Quote:Original post by loufoque
One voxel per cubic meter should be enough.
Which, by my calculations, is 1,000,000,000 voxels per cubic kilometre [wink]
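For concreteness, the arithmetic behind that figure, plus a rough memory estimate (the one byte of material ID per voxel is an assumption for illustration only):

```python
# One voxel per cubic metre, one kilometre (1000 m) on each side.
voxels_per_km3 = 1000 ** 3
print(voxels_per_km3)  # 1000000000

# Rough uncompressed memory cost, assuming 1 byte of material ID per voxel.
gib_per_km3 = voxels_per_km3 / 2**30
print(round(gib_per_km3, 2))  # 0.93
```

So even a single cubic kilometre is nearly a gigabyte uncompressed, which is why the later posts turn to compression and procedural generation.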

Your voxels have to be visualised in some way, whether by explicit tessellation (such as marching cubes), substitution of pre-created chunks (à la StarCraft II), or direct raycasting. And when we are talking about large worlds, with a billion voxels per cubic kilometre, raycasting is probably the only feasible approach.
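For reference, the core of a voxel raycaster is a 3D grid walk in the style of the Amanatides & Woo DDA. A minimal CPU sketch, assuming the world is just a set of occupied integer cells (a real engine would use an octree or brick map, likely on the GPU):

```python
import math

def raycast(voxels, origin, direction, max_steps=256):
    """Walk a ray through a voxel grid (Amanatides & Woo style DDA).

    `voxels` is a set of occupied (x, y, z) integer cells -- a stand-in
    for whatever acceleration structure a real engine would use.
    Returns the first occupied cell hit, or None.
    """
    step = [1 if d > 0 else -1 for d in direction]
    t_max, t_delta = [], []   # distance along the ray to each axis's next plane
    for o, d in zip(origin, direction):
        if d == 0:
            t_max.append(math.inf)
            t_delta.append(math.inf)
        else:
            cell = math.floor(o)
            boundary = cell + 1 if d > 0 else cell
            t_max.append((boundary - o) / d)
            t_delta.append(abs(1.0 / d))
    pos = [math.floor(c) for c in origin]
    for _ in range(max_steps):
        if tuple(pos) in voxels:
            return tuple(pos)
        axis = t_max.index(min(t_max))   # step across the nearest boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None
```

For example, `raycast({(3, 0, 0)}, (0.5, 0.5, 0.5), (1.0, 0.0, 0.0))` walks along +x and returns `(3, 0, 0)`. The per-step cost is constant, which is what makes the approach attractive at billion-voxel scale.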

Unfortunately, there doesn't seem to have been a whole lot of research into raycasting voxel data on modern GPUs.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

There has been some research done. Here's a contributor from a past Image of the Day.

http://voxels.blogspot.com/

He's doing 1024x2048x1024 = 2.14 billion voxels in realtime, so it's quite possible to accelerate it on the GPU.

This is with no LOD; with LOD thrown in, you could probably render to the horizon within a ~2 billion voxel budget. Though as you can see, the drawback with voxels is that they are blocky, and you'll need ultra-fine resolution up close to make realistic worlds, or you could tessellate in realtime, but even then everything starts to look rounded.

I think with a combination of a dynamic tiling scheme and GPU-accelerated ray casting, you can make something which is competitive with traditional polygon worlds. The advantage over the polygon-soup model would be its dynamic nature and the ease with which you can create procedural content for it.

-ddn

[Edited by - ddn3 on May 21, 2009 2:14:37 PM]
Quote:Original post by ddn3
There has been some research done. Here's a contributor from past Image of the Day.

http://voxels.blogspot.com/

He's doing 1024x2048x1024 = 2.14 billion voxels in realtime, so it's quite possible to accelerate it on the GPU.
That is pretty impressive - I missed that when I was looking into voxels.
Quote:This is with no LOD; with LOD thrown in, you could probably render to the horizon within a ~2 billion voxel budget.
I never managed to come up with a satisfactory LOD algorithm for voxels - naive approaches such as mipmapping the voxel field give pretty atrocious results.
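For what it's worth, the naive "mipmapping" approach being criticised here amounts to a 2:1 majority-vote downsample. A minimal sketch (the set-of-cells representation is assumed for illustration) makes the failure mode visible: any feature thinner than a coarse cell votes itself out of existence.

```python
def downsample(voxels):
    """Naive 2:1 voxel LOD: a coarse cell is solid if at least half of
    its eight children are solid. Thin features (walls, branches) lose
    the vote and vanish, which is one reason this looks so bad.
    `voxels` is a set of occupied (x, y, z) integer cells.
    """
    counts = {}
    for (x, y, z) in voxels:
        key = (x // 2, y // 2, z // 2)
        counts[key] = counts.get(key, 0) + 1
    return {cell for cell, n in counts.items() if n >= 4}
```

A full 2x2x2 block survives as one coarse cell, but a lone voxel (or anything one voxel thick in two axes) disappears entirely at the next level.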
Quote:I think a combination of dynamic tiling scheme in conjunction with GPU accelerated ray casting, you can make something which is competitive with the traditional polygon worlds. The advantage of the polygon-soup model would be its dynamic nature and the ease with which you can create procedural content for it.
Not sure I get you there - as I see it, procedural content should be much easier with voxels, and the same for terrain modifications (as long as the accelerated render structures are not too static).

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Quote:Your voxels have to be visualised in some way, whether by explicit tessellation (such as marching cubes), substitution of pre-created chunks (ala starcraft II), or direct raycasting.

Replacing voxels by pre-created meshes, just like 2D games used to do (tiles) is what I had in mind.
So that's the StarCraft II way too?

As for direct rendering of voxels, I don't think it's there yet.
There is a lot of research on voxels, however, since they allow linear-time global illumination.

Quote:Not sure I get you there

I think he meant the opposite, and that those were the advantage over traditional polygon soup.
Quote:Original post by loufoque
Replacing voxels by pre-created meshes, just like 2D games used to do (tiles), is what I had in mind. So that's the StarCraft II way too?
A better name for them might be '3D tiles', given that we usually prefer voxels to be small. But yes, if you look at some footage of StarCraft II, the terrain is formed of pre-built chunks. I am not quite sure (and very curious) how they blend between chunks of different types so effectively.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Yah I meant the opposite, blame it on posting and talking on the phone :D

Both a 3D tiling scheme using traditional polygon-data instancing and one doing it all in voxels are feasible approaches. From what I understand, next-generation GPUs/CPUs can handle massive amounts of data and computation, so most of this can be generated procedurally at runtime.

You could store the world as a compressed voxel blob in memory, which is then rendered using a procedural mixing scheme from base tile sets on the GPU or multi-core CPUs.
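As a sketch of the "compressed voxel blob" idea: terrain columns are mostly long runs of the same material, so even plain run-length encoding collapses them well (the material-ID representation here is an assumption for illustration; a real engine would likely use something fancier, e.g. a sparse octree):

```python
def rle_encode(column):
    """Run-length encode a column of voxel material IDs (e.g. 0 = air).
    Returns a list of (material, count) runs.
    """
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def rle_decode(runs):
    """Expand (material, count) runs back into a flat column."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

A 256-voxel column of rock, soil, and air compresses to three runs, and decoding is exact, so the blob can be expanded tile by tile as the renderer needs it.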

As for StarCraft, they have so many artists on staff that they can create several blending tiles for every combination of adjacent terrain pieces in the game (that's the brute-force approach). Or they could use some sort of texture-synthesis technology, which has been available for several years, and do it dynamically, but I doubt that.

To simulate the world, you can run several versions of LOD physics to dynamically reduce the CPU load. Simple physics treats everything as spheres and boxes; higher-level physics moves up the chain to constraints, joints, and more physical primitives; and ultimately the highest level of physics can have polygon collision, etc.
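The tiered idea above can be sketched as a simple distance-based selector (the tier names and thresholds are invented for illustration, not tuned values):

```python
def physics_tier(distance_to_player):
    """Pick a physics fidelity tier by distance to the player, so
    distant objects cost almost nothing. Thresholds are illustrative.
    """
    if distance_to_player < 50.0:
        return "full"     # polygon collision, joints, constraints
    if distance_to_player < 200.0:
        return "medium"   # constraints/joints on primitive shapes
    return "simple"       # everything approximated as spheres and boxes
```

A real scheduler would also hysterese the boundaries so objects don't flicker between tiers at a threshold.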

Good Luck!

-ddn
I've done procedurally generated world terrain/population systems, and you really cannot do it 'random' and 'on-the-fly', simply based on coordinates, without a significant lack of cohesion in the result (when you are trying to make it look 'natural'). Nature has fairly complex interrelations between adjacent features, and even something as rudimentary as water bodies and river systems is quite complex enough. (Full detail, up to ecosystems and societal constructs, is magnitudes more complicated.)

You need a seed map of the entire world to coordinate the creation of areas (giving their main themes/patterns). When you can build an 'area' all at one time (data accessible/realized), you can coordinate its building by applying the necessary rules that build cohesive patterns and cull out the impossible combinations.
Micro vs. macro detail: the seed map can still be programmatically/procedurally generated; it just has to be built all at once, when the proper rules can be applied to shape it.

When building an 'area', you need to fetch the adjacent areas and build up their details enough to get the edge micro-data to adjust the immediate area's formation; otherwise, as you move into that new area, it will clash with what you had just been in, part of which will probably still be in view.

The formation/building has to be deterministic for features that will be remembered, and coordinated to be 'logical'/proper for the natural patterns expected.

Potentially you could have several steps of this seed-map generation, with the next level down being created from the previous level's generalization (coefficients), again with enough detail to set the patterns/themes covering a large enough space so that the current 'area' can be built from cohesive/coordinated patterns.
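The determinism requirement above can be met by deriving each area's seed from the world seed plus the area's coordinates, so a revisited area rebuilds identically and neighbours can be consulted without storing them. A minimal sketch (the neighbour "coefficient" blend is a stand-in for whatever macro parameters the seed map actually carries):

```python
import hashlib
import random

def area_seed(world_seed, ax, ay):
    """Deterministic per-area seed: hash the world seed with the area
    coordinates, so the same area always regenerates identically.
    """
    digest = hashlib.sha256(f"{world_seed}:{ax}:{ay}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def build_area(world_seed, ax, ay):
    """Build one area, blending macro parameters from the 3x3 block of
    neighbouring areas so edges stay coherent with adjacent terrain.
    """
    rng = random.Random(area_seed(world_seed, ax, ay))
    neighbour_coeffs = [area_seed(world_seed, ax + dx, ay + dy) % 100
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    macro_elevation = sum(neighbour_coeffs) / len(neighbour_coeffs)
    return {"elevation": macro_elevation, "detail": rng.random()}
```

Because both the seed and the neighbour lookups are pure functions of coordinates, two adjacent areas built at different times still agree on their shared edge parameters.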
Ratings are Opinion, not Fact

This topic is closed to new replies.
