Moehammered

Voxel Theory & Practice


Now I do apologise in advance if I'm repeating a topic or thread, but I've used the search function and only found people talking about detailed aspects of voxel theory or rendering, and I unfortunately can't understand it clearly enough. I keep trying to think about how to implement it and can't get my head around it properly.

So what I do understand:
- A voxel is the 3D analogue of a pixel, in the sense that it stores data relative to its position in the world (on screen)

Now I feel like that is wrong as well, but I can't see how to implement it. Would I make something like a voxel structure that holds a position and a block type, or something along those lines?
I'm having trouble understanding the concept, I guess; I'm really confused about this specific topic.

E.G.

struct Voxel
{
    float x, y, z;
    char type;
};


I'm very lost on this topic; I'm really interested in it but can't seem to grasp the concept firmly and confidently. :)
I've also read the wiki page and one guy's explanations, but he talks more about raycasting than voxel theory.

Typically you would store it in a 3D texture or buffer (depending on what technology you are using) and assign a location in space to the entire volume. It's usually not practical to store the x, y, z location of each voxel individually, as that wastes too much space.
So you would have something like:


struct VoxelVolume
{
    float X, Y, Z;             //Location in world coordinates of the entire volume
    float SizeX, SizeY, SizeZ; //Size of the volume in world coordinate units
    int Width, Height, Depth;  //Size of the volume in number of voxels (in each direction)
    float* voxelValues;        //The actual voxel values, Width * Height * Depth of them,
                               //indexed as [x + Width * (y + Height * z)]
};



This is just a basic data structure. In order to render the volume you could cast a ray for each pixel that you draw on the screen, sample the volume at fixed points along the ray, and accumulate the value until you reach a certain threshold. There are also other volume rendering techniques; this is just an example so you can understand the basic idea.
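The accumulate-along-a-ray idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the struct, function names, and fixed-step sampling are my own assumptions, not anything from a particular renderer):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical dense W x H x D volume, stored flat as described above.
struct Volume {
    int w, h, d;
    std::vector<float> values; // size w*h*d, indexed x + w*(y + h*z)

    float sample(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= w || y >= h || z >= d)
            return 0.0f; // points outside the volume contribute nothing
        return values[x + w * (y + h * z)];
    }
};

// Step along the ray at fixed intervals and accumulate density,
// stopping once the accumulated value reaches `threshold`.
float marchRay(const Volume& vol,
               float ox, float oy, float oz,  // ray origin (voxel units)
               float dx, float dy, float dz,  // ray direction (normalized)
               float step, int maxSteps, float threshold) {
    float acc = 0.0f;
    for (int i = 0; i < maxSteps && acc < threshold; ++i) {
        float t = i * step;
        acc += vol.sample((int)std::floor(ox + dx * t),
                          (int)std::floor(oy + dy * t),
                          (int)std::floor(oz + dz * t)) * step;
    }
    return acc < threshold ? acc : threshold;
}
```

A real renderer would run something like this per screen pixel, with the ray derived from the camera; here it's just the accumulation loop in isolation.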

A pixel is a sample of a 2D image; a voxel is a sample of a 3D (or higher-dimensional) image. Representation depends on how sparse your data is, but you might just have a multi-dimensional array (at least conceptually), in the same way as you might have a 2D array of pixels. The type of the elements depends on what your voxels are samples of - e.g. they might be bone density, or air quality, or colour, or anything else, or some combination of these.

Voxels have an implicit representation in an array in the same way as pixels do in a 2D image. Therefore the [x, y, z] coordinate doesn't need to be stored: you generate the coordinates, use them as a lookup index into a 3D array, and fetch the colour value from the array cell, a.k.a. the voxel.

However, volumetric data can consume quite a lot of memory. If your volume data is sparse, you'd be wasting memory on invisible voxels. Therefore, one could store only the opaque voxels, with explicit coordinates (along with colour), and discard the transparent ones, potentially saving on memory allocation in the process. Unfortunately this approach comes at a cost: the implicit relationship between neighbouring voxel entries in the 3D array is lost, so you can no longer use a simple table lookup to fetch voxels, making queries more expensive.
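The two storage schemes described above can be contrasted in code. This is a hypothetical sketch (both struct names and the 21-bits-per-axis key packing are my own illustrative choices):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Dense storage: coordinates are implicit in the array index.
struct DenseGrid {
    int w, h, d;
    std::vector<uint8_t> type; // one byte per voxel, 0 = empty

    uint8_t get(int x, int y, int z) const {
        return type[x + w * (y + h * z)]; // O(1), coords never stored
    }
};

// Sparse storage: only non-empty voxels kept, with explicit packed coords.
struct SparseGrid {
    std::unordered_map<uint64_t, uint8_t> cells;

    // Pack three non-negative coords (up to 21 bits each) into one key.
    static uint64_t key(int x, int y, int z) {
        return (uint64_t)x | ((uint64_t)y << 21) | ((uint64_t)z << 42);
    }
    void set(int x, int y, int z, uint8_t t) { cells[key(x, y, z)] = t; }
    uint8_t get(int x, int y, int z) const {
        auto it = cells.find(key(x, y, z));
        return it == cells.end() ? 0 : it->second; // missing entry = empty
    }
};
```

The dense grid pays memory for every cell but lookups are a single index computation; the sparse grid pays a hash lookup per query but stores nothing for empty space.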

Hmmm, so I've done some reading. Now I think I get it a little - would this be correct:

So say I have a 3D array of 'colours' (similar to the colour values of pixels in a 2D array) and I simply want to draw a cube in 3D space using the volume data available (the 3D array). Would it be like:
  • grab 8 points in 3D space
  • find the volume data at these points by indexing into the 3D array with the vertex locations
  • construct the vertices from these points
  • assign the colours of these vertices from the data stored at those locations

I think I'm starting to grasp it a bit.

If you split the world into chunks of 256x256x256, you only need byte values for the individual voxel positions.

What you're proposing works fine; I've done it myself.

I've been working on a very long blog post about representation structures for computer graphics and simulation. :)
I'll let you have a taste of my draft...

1.3 Sparse Voxel Octrees and Uniform Instancing

Some brilliant graphics programmers are experimenting with an attractive new way of representing graphical data, using "sparse voxel octrees". So uh, what is that?


Before I waste my time, I recommend you do some searches on Sparse Voxel Octree technology and look at real examples, too.


Henceforth I will abbreviate the term as 'SVO'.

Anyway, the most notable among these frontier programmers are Jon Olick, Cyril Crassin, and Branislav Síleš (unassociated with each other), each of whom has demonstrated a variety of impressive uses of, and potential in, the technology.

Ultimately, each of their SVO implementations is only intended to be used in hybrid applications, all of which depend on polygon meshes to represent at least the dynamic components within scenes. The picture shown below is from Cyril Crassin, who worked on a technique which uses SVO technology to approximate triangular geometry at real-time speeds. The dynamic SVO geometry approximation can be used to cheaply simulate ray-traced effects such as local reflection (as demonstrated in the Crysis 2 DX11 Ultra Upgrade), or a variety of other effects, such as fluid simulation and dynamic world path finding.


[Image: Cyril Crassin's SVO-based real-time approximation of triangle geometry]


Now, I'm only sixteen, but I remember playing a lot of Super Mario World back in the 90s (yeah, I could barely talk). Many games of the time used simple tile sets for representing and visualizing levels, which was almost the only way to do anything, really.


Although it's an old technique, some very popular games of today are still similarly based on this uniform instancing format, e.g. RuneScape and Minecraft. Rather than sprites, RuneScape presents 3D models set about a regular planar map, while Minecraft uses textured cubes (or graphical voxels) in a "volumetric" world.



SVOs don't use this kind of instancing. You might rather consider voxels themselves as instances, however.

These representational techniques are all based on a strictly uniform Euclidean-space format, with either square or cubic units. They essentially benefit from their geometric simplicity, which gives them advantages over more arbitrary (but also more demanding) primitives. Of course, this can also be somewhat restrictive.


Before I go any further, I'll properly introduce the actual advantages that graphics programmers recognize in these geometrically uniform structures.

  • Essentially, they're dedicated to the principle of instancing, making them ideal for intelligent streaming and compression.
  • Planar and spatial units (i.e. squares and cubes) are analogues of linear units. When axis-aligned, these primitives are incredibly fast for intersection tests and other computations that are usually pricey. This enables many advanced physics and lighting effects.
  • In particular, sparse voxel octrees give volumetric data a more practical representation by eliminating the need to store empty space. The nature of SVOs also makes them mechanically well suited to intelligent level-of-detail.

    Steady advancements in SVO technology are tending to reduce natural voxel aliasing even while the magnitude of content increases.

I'm trying to implement the sparse voxel octree method described in Crassin's GigaVoxels thesis using the compute shader in DX11.
I understand that every "brick" (the lowest-level voxels, which store the actual colour information) can be stored in a Texture3D.

He mentions that the actual nodes are stored separately in a 1D GPU linear cache, which I assume would just be a simple one-dimensional array?

In terms of storing the actual node bounds, is the standard way to store the xyz coordinates of the corners of each node?
The way I'm currently thinking of doing it is to take the minimum and maximum coordinates of a large bounding box, split it into eight boxes, store the minimum and maximum coordinates of each of these eight boxes, and recursively split and store in this manner.
Then I assume I'd add each pair of minimum and maximum coordinates to the 1D array as I recursively split, and store the stride in a separate index array, which I could then use to retrieve the data at a later stage.

Has anyone else been able to implement a sparse voxel octree structure on the GPU? How are you storing your structure?

So I think there may be some confusion between a voxel and a 3D texture location.

A voxel does not necessarily have anything to do with texels. A voxel is simply a cubical "chunk" of 3D space, usually represented by 3D coordinates (could be a corner, the center of the voxel, etc.) and an edge length. You DON'T need three numbers for the width, height, and depth of CUBICAL voxels; you only need one, the edge length. Remember, all edges of a cube are the same length. You will most likely use three numbers for the index location though (assuming your voxel is in a grid), or for the X, Y, Z location of a corner or the center of the voxel.

A 3D texture is a three-dimensional array of colour samples stored in memory (most likely on the GPU). If you wanted your voxel data to correspond to a single 3D texture, you would scale your voxel data so that all of it lies within [0, 1] and use the transformed coordinates of each voxel corner as texture coordinates. But this doesn't have to be the case: you could scale your voxel data to span 2, 3, or 1000 3D textures. The choice is yours; just don't assume voxels are always the same as 3D texels, because some people don't even use voxels with colour data.
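The [0, 1] normalization described above is just a per-axis divide. A minimal sketch of the mapping (struct and function names are my own, assuming an integer voxel grid mapped onto a single 3D texture):

```cpp
// Scale a voxel corner from integer grid coordinates into [0, 1] so it
// can be used directly as a 3D texture coordinate.
struct TexCoord { float u, v, w; };

// gridX/Y/Z: integer corner of the voxel; width/height/depth: grid size
// in voxels. Corner (width, height, depth) maps to (1, 1, 1).
static TexCoord voxelCornerToTexCoord(int gridX, int gridY, int gridZ,
                                      int width, int height, int depth) {
    return { (float)gridX / width,
             (float)gridY / height,
             (float)gridZ / depth };
}
```

If the voxel data spanned several textures instead, the same divide would be applied per texture region rather than over the whole grid.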


I'm trying to implement the sparse voxel octree structure method described in the Crassin Gigavoxels thesis using the compute shader in dx11.


A new book came out recently entitled OpenGL Insights, in which Crassin has written a chapter on how to generate the voxels on the GPU and how to generate an SVO on the GPU. The chapter is actually available on the website as one of their free sample PDFs ( http://openglinsights.com/ ) - look for the chapter 22 PDF. It may help you in your implementation, as I found it very clear and concise to read.


The way I'm currently thinking of doing it is to take the minimum and maximum coordinates of a large bounding box, split it into eight boxes and storing the minimum and maximum coordinates of each of these eight boxes and recursively split and store in this manner.


You would not store the minimum and maximum coordinates in the nodes. The span of a node is inferred implicitly from the node's level and its index relative to its parent node. I would refer you to an excellent paper entitled Efficient Sparse Voxel Octrees by Laine and Karras, in which they describe exactly how the data structure is implemented as well as traversed. Their method is not for the GPU, but the same ideas translate to the GPU, and it is referenced by Crassin in his paper as the basis for his data structure.
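The idea of inferring a node's span from its level and child index can be sketched directly. This is a hypothetical illustration, not the Laine/Karras or Crassin layout: it assumes a unit root cube and a convention where bits 0, 1, and 2 of a child index (0-7) select the +x, +y, and +z halves respectively.

```cpp
// Derive a node's bounding cube from the path of child indices taken
// from the root, instead of storing min/max coordinates per node.
struct Cube { float x, y, z, size; }; // min corner and edge length

static Cube childBounds(Cube parent, int childIndex) {
    float half = parent.size * 0.5f;
    return { parent.x + ((childIndex & 1) ? half : 0.0f),
             parent.y + ((childIndex & 2) ? half : 0.0f),
             parent.z + ((childIndex & 4) ? half : 0.0f),
             half };
}

// Walk a path of child indices down from a unit root cube at the origin.
static Cube boundsFromPath(const int* path, int depth) {
    Cube c = { 0.0f, 0.0f, 0.0f, 1.0f };
    for (int i = 0; i < depth; ++i)
        c = childBounds(c, path[i]);
    return c;
}
```

Since the bounds fall out of the traversal itself, each node only needs to store its child pointers (or a child mask), which is where the memory savings over explicit min/max storage come from.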

Thanks scyfris - you've really clarified a lot for me. I've actually got the OpenGL Insights chapter, and I also found the Laine/Karras source code for their paper - I just need to find the paper itself.

I guess I'll need to read into the fundamentals of voxels and octrees first to gain an understanding.
