[Theory] Unraveling the plausibility of Unlimited Detail

166 comments, last by Ben Bowen 11 years, 10 months ago

To "just add another child type" implies virtual inheritance, which adds 4 bytes (the typical size of a color) to every object. So where exactly have you saved versus just replicating the color in every single child?

It only adds data if there's a virtual function to be called. If you're traversing the tree from the top down, you don't need to call anything in the children; you just need to skip them. As far as I know, SVOs generally use compression techniques based on just storing whether or not a child exists, which is why they are so efficient. It doesn't seem like a huge stretch to extend that with a flag for whether a child uses its parent's color data or not; roughly 6 bits total per voxel for static geometry if we double the ~3 bits per voxel I've heard an SVO needs.

It would get more complex for voxels that do have color data, but I'm not going to come up with a voxel rendering scheme off the top of my head without putting more thought into it. There's still no reason to store color data for every voxel.
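
As a rough sketch of that kind of layout (the field names and packing here are my own invention, not taken from any particular SVO implementation): a node stores a child-existence mask plus one extra mask saying which children override the inherited color.

```cpp
#include <bit>      // std::popcount (C++20)
#include <cstdint>
#include <vector>

// Illustrative compact SVO node along the lines described above.
struct SvoNode {
    uint8_t  childMask;   // bit i set => child i exists (the "few bits per voxel" data)
    uint8_t  colorMask;   // bit i set => child i stores its own color instead of inheriting
    uint32_t firstChild;  // index of this node's first existing child in a flat node array
    uint32_t color;       // RGBA; used by children that don't override it
};

// Color of child i of `parent`, assuming that child exists: use its own color
// only if it overrides, otherwise inherit the parent's.
inline uint32_t childColor(const std::vector<SvoNode>& nodes,
                           const SvoNode& parent, int i)
{
    // Existing children are packed contiguously; count set bits below i to index them.
    uint32_t idx = parent.firstChild +
                   std::popcount(uint8_t(parent.childMask & ((1u << i) - 1u)));
    return (parent.colorMask & (1u << i)) ? nodes[idx].color : parent.color;
}
```
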
Okay, assuming that's feasible, do you feel that it's realistic to assume that a large number of child voxels will share all lighting attributes with their parents? Do you think that would make for an interesting world? Do you think it's useful to have a high level of detail, such that when you zoom in all of a sudden everything is the exact same color?

[quote]
Okay, assuming that's feasible, do you feel that it's realistic to assume that a large number of child voxels will share all lighting attributes with their parents? Do you think that would make for an interesting world? Do you think it's useful to have a high level of detail, such that when you zoom in all of a sudden everything is the exact same color?
[/quote]

Not only do I think it's realistic, I'd consider it the norm when you have that kind of geometry detail. Most of our texture data today is just replicating various lighting and volumetric effects that are totally redundant with voxels.

Here's a simple example just pulled from google. If you look at the cliffs most of their diffuse data is just replicating shadows. With higher geometry detail you could easily get similar results with a single color.

[quote]
Not only do I think it's realistic, I'd consider it the norm when you have that kind of geometry detail. Most of our texture data today is just replicating various lighting and volumetric effects that are totally redundant with voxels.

Here's a simple example just pulled from google. If you look at the cliffs most of their diffuse data is just replicating shadows. With higher geometry detail you could easily get similar results with a single color.
[/quote]

No, you couldn't actually, because every single little pixel that's a slightly different shadow color from the one next to it would have to have its own normal. Every little pixel that's a slightly different grass or rock color from the one next to it would have to have its own color. How do you propose to get the detailed color variations if each fine-grained voxel is exactly like its neighbor?

[quote]
Most of our texture data today is just replicating various lighting and volumetric effects that are totally redundant with voxels.
[/quote]

And what does this even mean? This is absolutely false.

The vast majority of voxel applications don't do any shading, and therefore they don't need to store things like normals and binormals and specularity coefficients, etc. In games we do have to, unless you're suggesting voxelizing to the level of detail of actual atoms on a surface, and simulating physics-based light transport and scattering models.

Is that what you're suggesting?

[quote]
No, you couldn't actually, because every single little pixel that's a slightly different shadow color from the one next to it would have to have its own normal. Every little pixel that's a slightly different grass or rock color from the one next to it would have to have its own color. How do you propose to get the detailed color variations if each fine-grained voxel is exactly like its neighbor?
[/quote]

Why do you need normals that are any different from what can be generated from the voxels themselves when you have detail down to the pixel level? You don't need different diffuse colors to get different shades any more with voxels than you do with polygons. You just let the light and the detail do the footwork.

The only reason we have half the texture detail we do now is that we don't have the geometry detail we want. If we had the geometry detail we wanted, it stands to reason that we wouldn't need quite so much texture/color detail.

[quote]
Why do you need normals that are any different from what can be generated from the voxels themselves when you have detail down to the pixel level? You don't need different diffuse colors to get different shades any more with voxels than you do with polygons. You just let the light and the detail do the footwork.

The only reason we have half the texture detail we do now is that we don't have the geometry detail we want. If we had the geometry detail we wanted, it stands to reason that we wouldn't need quite so much texture/color detail.
[/quote]

Why do you need normals that are different from what can be generated from the voxels themselves? You say "let the light and the detail do the footwork". That implies you need surface normals. Where are you going to get the surface normals if they're not stored? Are you going to generate them by analyzing neighbors? That hasn't been shown to be practical.


Regarding color.. so in your world, surfaces are all completely monochromatic? They aren't like that in my world. Most objects aren't made up of a single compound. You're greatly underestimating the sheer number of voxels you'd need to represent a surface and have it not look like molten plastic.
Maybe a stupid question, but wouldn't nice textures (not in the CS meaning) require interpolating between the voxels? I mean, we all know how crap textures (in the CS meaning) look even with bilinear interpolation. A lot of the time mip-mapping isn't good enough either.

I can imagine that monochromatic stuff would look right without interpolation, but textures (not in the CS meaning) seem to be different.

Or is it not an issue? Are voxels interpolated anyway? Did my post make any sense, written right before going to sleep?

[quote]
Why do you need normals that are different from what can be generated from the voxels themselves? You say "let the light and the detail do the footwork". That implies you need surface normals. Where are you going to get the surface normals if they're not stored? Are you going to generate them by analyzing neighbors? That hasn't been shown to be practical.
[/quote]

You don't need user-specified normals, period. The purpose of a normal is just to simulate light bouncing off a surface that doesn't exist. If the surface does exist, there's no point in generating a normal that's any more complicated than just using the surface itself.
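
For what it's worth, one common way to get a normal out of the voxel data itself is the central-difference gradient of a density/occupancy field; a minimal sketch, assuming some `density(x, y, z)` sampler supplied by the caller:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Derive a normal from the voxel data itself instead of storing one:
// central-difference gradient of a density/occupancy field.
// `DensityFn` is any callable (x, y, z) -> float (e.g. 0 = empty, 1 = solid).
template <typename DensityFn>
Vec3 voxelNormal(DensityFn density, int x, int y, int z)
{
    Vec3 n = {
        density(x - 1, y, z) - density(x + 1, y, z),
        density(x, y - 1, z) - density(x, y + 1, z),
        density(x, y, z - 1) - density(x, y, z + 1),
    };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;   // points from solid toward empty space
}
```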

[quote]
Regarding color.. so in your world, surfaces are all completely monochromatic? They aren't like that in my world. Most objects aren't made up of a single compound. You're greatly underestimating the sheer number of voxels you'd need to represent a surface and have it not look like molten plastic.
[/quote]
Most surfaces are monochromatic. At the level of detail we're talking about, almost everything is entirely monochromatic. You don't need to change the color of the ground that drastically when you can add visual interest by just adding a pothole or tire tracks to the actual geometry. All you have to do is walk around. All the walls in my apartment are exactly the same color but get all their different values from the light they take in. Same with most of the chairs and other fabric. Even the wood doors have relatively huge bands of a single color when you compare them to the scale of a voxel. The coat hangers are all the same matte metal. The knives and kitchen utensils are three-quarters metal and the other quarter is all black. The lamp behind me is monochrome matte metal, even though it has details in the metal, with a monochrome lamp shade. The TV is entirely the same glossy black. The speakers around the room are all solid matte black. The vents are all single-color matte white.

If you think colors can't be inherited from parents as far as voxels are concerned, you severely underestimate how much of the color difference you see is just a change in value caused by shadow or a different shade of light. Look at how image formats compress data: really this is just run-length-style compression applied to volumes instead of rows of pixels. Why should we think volumes need to be any different? It's not like I'm talking about reinventing the wheel, just sticking the same old wheels onto a new engine.
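
A toy version of that inherit-from-the-parent idea, on a naive pointer-based octree (illustrative only; a real SVO would pack this far more tightly):

```cpp
#include <cstdint>

// Toy pointer-based octree node, just to show the "inherit from the parent" pass.
struct Node {
    uint32_t color;        // RGBA
    bool     ownsColor;    // false => color comes from the nearest ancestor that set it
    Node*    child[8];     // null for missing children
};

// Run-length-in-depth: children whose color equals the inherited one stop storing it.
// Call on the root with the root's own color as `inherited`.
void inheritColors(Node* n, uint32_t inherited)
{
    if (!n) return;
    if (n->color == inherited) {
        n->ownsColor = false;   // this node can simply reuse its ancestor's color
    } else {
        n->ownsColor = true;
        inherited = n->color;   // it becomes the inherited color for its subtree
    }
    for (Node* c : n->child)
        inheritColors(c, inherited);
}
```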


[quote]
Maybe a stupid question, but wouldn't nice textures (not in the CS meaning) require interpolating between the voxels? I mean, we all know how crap textures (in the CS meaning) look even with bilinear interpolation. A lot of the time mip-mapping isn't good enough either.

I can imagine that monochromatic stuff would look right without interpolation, but textures (not in the CS meaning) seem to be different.

Or is it not an issue? Are voxels interpolated anyway? Did my post make any sense, written right before going to sleep?
[/quote]

One of the most difficult topics I've seen, actually. Cone tracing and other sampling methods work. Also, simply relying on voxels to collapse their subtrees into their parents is key: from very far away, an object that covers less than a pixel can merge the colors of its top-level subtrees into a single color. As you move closer, the ray only traverses into the first level, grabbing the merged color, so in actuality the working dataset is only 8 color values (assuming a subtree at the highest levels). This leads a lot of people to realize you don't need to load that much data to get visually amazing detail. It's the same theory behind not loading the highest mip level of a texture the user can never get close enough to see. Carmack actually discussed this in his 2011 QuakeCon talk, where he described running a visibility test so they could lower the quality of textures the player couldn't get close to. In the same way, a space station that might be 10 GB of realistic voxel data would stream only the top nodes, a la Google Images, and it would look perfectly fine. This is where the idea of automatic level of detail comes from.
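
That "collapse subtrees into their parents" step could be as simple as a bottom-up averaging pass; a sketch on a naive pointer-based octree (a real format would store this more compactly):

```cpp
#include <cstdint>

struct LodNode {
    uint8_t  r, g, b;
    LodNode* child[8];   // null for missing children
};

// Build collapsed colors bottom-up: each interior node takes the average of its
// existing children, so a distant (sub-pixel) object can be shaded from a single
// node without touching the rest of its subtree.
void buildLodColors(LodNode* n)
{
    if (!n) return;
    unsigned r = 0, g = 0, b = 0, count = 0;
    for (LodNode* c : n->child) {
        if (!c) continue;
        buildLodColors(c);
        r += c->r; g += c->g; b += c->b;
        ++count;
    }
    if (count) {                    // leaves keep their own color
        n->r = uint8_t(r / count);
        n->g = uint8_t(g / count);
        n->b = uint8_t(b / count);
    }
}
```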

Anyway, the mipmapping problem with voxels is an interesting one with a lot of approximate solutions. If you want an exact solution, though, imagine your screen: for each pixel there is a frustum emanating out, with its faces adjacent to the faces of the neighboring pixels' frusta. Your goal is to find a way to pull back all the voxel data inside the frustum while also discarding voxels that are behind other voxels. In a way it's similar to the optimal octree frustum culling algorithm (the one that uses pretty much only addition and works with an unlimited number of frustum planes; if you don't know what I mean, implement it with a quadtree and your brain will explode with ideas). The caveat is that you scan front to back and subtract the frusta generated by the voxels you include, clipping and tracking the colors of the shapes used to create the volume. It's an extraordinarily complicated algorithm that I've only sketched out on paper myself. You end up getting back a square region made up of a bunch of colored sub-regions, and you merge all the colors weighted by their area to get the final pixel value.
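
That final merge is just an area-weighted average; assuming you have already found the visible cross-section area each voxel covers within the pixel (which is the hard part), it would look something like this:

```cpp
#include <vector>

struct Fragment {
    float r, g, b;
    float area;   // unoccluded area this voxel covers in the pixel's cross-section
};

// Merge every visible voxel's color weighted by the area it covers in the pixel.
// Finding the visible areas is the hard part; this only shows the merge.
void mergePixel(const std::vector<Fragment>& frags, float& r, float& g, float& b)
{
    float total = 0.0f;
    r = g = b = 0.0f;
    for (const Fragment& f : frags) {
        r += f.r * f.area;
        g += f.g * f.area;
        b += f.b * f.area;
        total += f.area;
    }
    if (total > 0.0f) { r /= total; g /= total; b /= total; }
}
```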

As an example, if you looked and saw only two voxels in your pixel's frustum, it might look like this:
[attached image: frustumpixel.png]
I colored the sides of one voxel differently so the perspective can be seen.

The nice thing about this is that you get amazing anti-aliasing especially if your voxel format defines infinite detail contour data. (That is you have subtrees that loop back around a few times to generate extra detail or nodes that define contours from a map in order to procedurally generate detail).

It's a fun topic with not very much research. A lot of the research papers you find though cover raytracing concepts. I wish someone would invest in making raycasting hardware if only to run via Gaikai or Onlive. :P

I recommend reading Laine's papers on SVO stuff.

How this relates to point-cloud rendering I have no idea. I assume they found an interesting algorithm that might be different than the normal raycasting/3DDDA stuff.

This topic is closed to new replies.
